Notes on usability and related things by a project manager who manages electronic publishing projects.

November 30, 2012

It's amazing how often new software bugs are reported when you introduce new testers to the testing process. It's quite likely that some of these are not new breakages in the latest software release. What is usually happening is that the new testers:

read the test script differently. It is very difficult to write unambiguous instructions, and it would be unconventional to test the test script on a lot of testers until any editorial problems were fixed - usually that would be a poor use of resources. Or:

they read it carefully and take it literally. By contrast, your established testers have long got used to skimming the instructions to remind themselves what to do, relying mostly on their experience and memory of previous tests. It's the trade-off you make for their faster progress through the tests. It's very difficult to make yourself read a long, boring test script for the umpteenth time as if you'd never seen it before (at least, I find this so). In this case the testers, new and old, are making an interesting kind of error (if it is an error) - a Goldovsky error.

This phenomenon of new tester-related bug reports can go on for a long time if the test script is long and complicated or gets updated over time (or both).

Usually this is seen as a nuisance - after some effort, it's discovered that the bugs aren't real things going wrong with the software; they are artifacts of the test case and tester. It's easy to feel that the tester (or the test script author) "should have got it right in the first place". That would be a fair criticism if lots of time and resources had been allowed to get the test script exactly right and train up the testers. But how often do you see that? More usually the team decided on a trade-off (faster preparation for testing in return for a lower-quality script) and now someone is moaning about the downside of it. Or we didn't decide on a trade-off as such, we just didn't allow enough time to prepare the script, and the test-script errors coming through are how we are learning this uncomfortable truth.

You could see Goldovsky errors as a blessing. Just occasionally the new guy discovers something that is important and was overlooked by the specific way things were done before. You could argue that, for best bug discovery, you ought to rotate people on and off the team of testers to take advantage of this. Hmm, I'm not sure. On long projects, or systems being regression tested each release, staff turnover or other change tends to make this happen anyway without having to do it as a matter of policy.

Conversely, training your testers extensively teaches them to use the system just like you do and risks missing out on all the unexpected and creative things people do when they work from instructions, and the discoveries that might come from that.

Anyway, your customers don't have the benefit of a script, probably won't read the instructions and will certainly do a whole pile of things you won't ever think of until you either do usability tests or launch the product and get customer feedback.

October 27, 2009

UXmatters have a well-argued discussion of the pros and cons of checking your design by user testing, as opposed to having an expert do a review. These methods achieve different things. For example, suppose you have a design with green navigation tabs, and a red colour used to show highlighting. A usability reviewer should immediately point out that this design is not usable by anyone who is red-green colourblind - a point you might miss if you tested with real users and none of them happened to be colourblind. On the other hand, expert reviewers can suffer from their own biases about how things ought to be done. The ideal is to do both - review the design from a usability point of view yourself (bringing in an expert if needed) and then try it on real customers. Real customers are the only way to get the authentic - and sometimes unexpected - voice of the customer. Among the things real users can do for you is to help you explore whether you've designed workflows in the way that the users (or most of them) expect.

June 18, 2009

When developing websites, it is useful to know which browsers your customers will be using. The best guide is your own usage stats, but if you don't have those, or want to see how your customers compare with a wider sample, StatCounter publish figures based on "aggregate data collected by StatCounter on a sample exceeding 4 billion pageviews per month collected from across the StatCounter network of more than 3 million websites." They have a nice graphing tool you can use to look at the browser war in different areas and over different time periods.

Worldwide, for 19 May - 17 June, StatCounter has IE7 still ahead (just under 32%), with Firefox 3 close behind.

Firefox is ahead of IE7 in Europe (36% to IE7's 27%). IE7 has a bigger share in North America (38% to 28%). IE8 has yet to get above 10% (it has just reached this in North America, and has yet to get to 8% in Europe).

November 27, 2008

Updating the Flash player does not seem to clean out all the files belonging to older versions. Maybe that is a bad thing, given that there are some nasty exploits of old Flash players. Also, sometimes I want to clean out a machine and put a particular Flash player on it for testing.

Older versions of the Flash player are available from Adobe's Flash player archive. You might want an older version for testing - it is not a good idea to use an obsolete player for general use, because of the danger of it being exploited.

June 23, 2008

Since composing my last post on wireframes, I have come across a couple of articles on the subject which reinforce the need to keep wireframes simple - in terms of what they are for as well as how they look.

Sarah Harrison states the problem nicely:

"Standard wireframe documents look so much like a web page layout, we ask viewers to use immense amounts of imagination to divorce that which the wireframe is trying to communicate from what its visual representation is communicating. ... The main problem with wireframes is when they try to do too much, serving multiple purposes at the same time. The key, in my opinion, is to decide what the essential purpose is for your wireframe documents. Different purposes might require a different format."

Dan Brown (who goes on to suggest one possible solution) has had this problem:

"The conflict arose after clients had seen the wireframes. The layout, even explicitly caveated, would set their expectations, and they did not appreciate screen designs that strayed too far from them, no matter how carefully crafted. Clients also struggled to talk about information priorities, taxonomies, and functionality. Placing these concepts in a layout made them more accessible, but our conversations were too tactical, and their feedback had more to do with design than with structure."

To me, these interesting articles reinforce the need to think ahead about the process within which the wireframing sits. That should help to keep the wireframes as a quick, disposable tool to help with the next task in hand - I don't think one wireframe can cover all the aspects of logic, layout, emphasis and so on without losing the quick-and-cheap benefits that wireframes ought to have.

November 28, 2007

I received a useful comment from reader "Arium" on my post "Tabs, used right". Arium was helpfully pointing me to some interesting research from ClickTale on whether people scroll down past "the fold" (the point where a long web page runs off the bottom of the screen).

76% of the page-views with a scroll-bar were scrolled to some extent.

22% of the page-views with a scroll-bar were scrolled all the way to the bottom.

If this sample is representative, there's roughly a one-in-five chance that stuff down at the bottom gets read. Not great, but maybe not so dissimilar from the chances of lesser information if it were put on a succession of short pages rather than one long scrolling one. This is interesting given the received wisdom that stuff below the fold won't get read.
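Taking the ClickTale figures at face value, the "one in five" claim can be checked with a quick back-of-the-envelope calculation (a sketch only; the 76% and 22% figures are the ones quoted above):

```python
# Back-of-the-envelope check on the ClickTale scrolling figures.
scrolled_at_all = 0.76      # share of scrollable page-views scrolled at least a little
scrolled_to_bottom = 0.22   # share scrolled all the way to the bottom

# How many page-views per bottom-of-page read? About 4.5, i.e. roughly 1 in 5.
print(round(1 / scrolled_to_bottom, 1))   # 4.5

# Of the visitors who scroll at all, what fraction persist to the very bottom?
print(round(scrolled_to_bottom / scrolled_at_all, 2))   # 0.29
```

So even among people who scroll at all, fewer than a third go all the way down - worth remembering when deciding what to put at the foot of a long page.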

September 14, 2006

Since I posted yesterday's item about caching problems, I have had an interesting suggestion: it might be possible to get around this by requesting pages through a web proxy server. Such services exist on the web for free (you might have some adverts floated on top of your page) - they are largely aimed at people who want to browse anonymously, but they should also mean that my ISP can't tell that I am once more after a page it has cached.

This Wikipedia article about proxy servers includes links to such services. Do also read the parts of the article about abuse: if you go via a proxy server you cannot tell who might be intercepting your data, so it might be unwise to use one if you will be entering personal details such as passwords.
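For the scripted equivalent of this workaround, a request can be routed through an explicit proxy using Python's standard library. This is a minimal sketch, and the proxy address is a made-up placeholder - substitute a real proxy you trust:

```python
# Sketch: route HTTP requests through an explicit web proxy using only the
# standard library. The proxy address below is a hypothetical placeholder.
import urllib.request

proxy = urllib.request.ProxyHandler({"http": "http://proxy.example.com:8080"})
opener = urllib.request.build_opener(proxy)

# opener.open("http://example.com/page.html") would now fetch the page via
# the proxy rather than over your ISP's usual route, so any cache sitting on
# that usual route is bypassed (the proxy may, of course, have its own cache).
```

The same caution as above applies: anything sent through a third-party proxy can be read by whoever runs it.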

Note added 19 Sept: another tip I have received is that you can flush the DNS cache on your own Windows machine, which clears out stale address lookups (though note this clears your local cache, not cached page content held by your ISP). You can do this from the Run prompt in the Start menu of Windows (go Start >> Run >> and then type ipconfig /flushdns in the Run input box). Microsoft have a note about this procedure. And here is some stuff about other ipconfig commands.

September 13, 2006

I just became a victim of a web site testing "gotcha" that I have not seen before - maybe it is worth making this more widely known. Or at least you can sympathise/enjoy the joke at my expense.

A new website is just about to go live. Yesterday I emailed the developer about one last mistake. Time is of the essence now, so she emails me back quickly saying she has fixed it, and could I please check it so that she can go live.

So I clear my cache and go and look. No, it is not fixed. I email the developer, and also CC a colleague in head office. The head office colleague then phones me to say that when he looks at the page, the mistake IS fixed. We try to work out how we could be seeing different versions of the same page:

He checks that his cache is cleared too.

I check on another computer on my network - and I still see the page with a mistake in it.

He asks a colleague to look at the site - she sees the corrected page.

We discuss how we would understand it if I could see the fixed page and he could not (as opposed to the other way around) - we'd assume that he was looking at a page cached on his company's server (my small network is simply a wireless router on the end of an NTL cable modem). Boy, are we close here, but we don't quite realise the nature of the problem at this point.

We decide that it would be useful to ask someone else to look at the page over the Internet without having to go via the company server. Colleague's wife is phoned and obliges. She can see the corrected page, and has also seen this problem before. What we have overlooked is that my ISP may have been caching pages on their servers.

Aha! (or possibly Doh!). I experiment by viewing the page over my dial-up connection (which uses a different ISP). And hooray, I can see the corrected page, so we feel it is OK to go live now.

Phew. I have not seen that one before.

What do you think - is this a useful one to bear in mind, or does "every fule no" this already (except me)?

As far as I know there is no way I can do anything about what my ISP caches and how long it keeps it; I just have to wait for it to reload the page, or rely on having a second, backup ISP (which I like to do anyway, so that I am not completely cut off from the Internet if my primary ISP goes down).
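One general technique worth knowing here - I haven't verified it against this particular ISP, so treat it as a sketch - is "cache busting": appending a throwaway query parameter so that the cache sees a URL it has never stored. The helper function below is hypothetical, built from Python's standard library:

```python
# Sketch of a cache-busting helper: append a unique query parameter so that
# a caching proxy treats the URL as new. Whether this actually defeats a
# given cache depends on how that cache is configured.
import time
from urllib.parse import urlencode, urlparse, urlunparse

def cache_busted(url: str) -> str:
    """Return the URL with a timestamped 'nocache' query parameter added."""
    parts = urlparse(url)
    extra = urlencode({"nocache": int(time.time())})
    query = parts.query + "&" + extra if parts.query else extra
    return urlunparse(parts._replace(query=query))

print(cache_busted("http://example.com/page.html"))
# e.g. http://example.com/page.html?nocache=1158105600
```

The drawback for this kind of go-live check is that you are then not looking at quite the same URL your customers will use, so it confirms the fix is on the server without proving the cached copy has expired.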

June 09, 2006

In a couple of projects recently, I have had a chance to see the value of viewing sites from a computer that does not access the web via the company's servers. In one project, problems with the company's proxy server made a site under development seem very slow. In another case, we set up an email address for customers to leave feedback. This actually comes to me (at least for now). Test emails originating from the developer reached me OK, but when I tested the email account myself, it didn't work. It turned out that successful delivery relied on the mailing computer knowing an alias for my proper email address. So there were no problems from within the company, but it didn't work for the rest of the world.

So it is well worth the project team experiencing the site from outside the company infrastructure - visiting it from home would be fine.