
Updates from May, 2015

May 30th, 2015

In any software project, we need to understand the people who will use the product. There are many different points of view on what approach we should take.

Spool: Vision, feedback, culture

Some promote alternatives to User Centered Design in usability practice. Spool (2008, 2009) calls the usability community’s attention to the risk of accepting methodologies blindly and following them as dogma. Instead of measuring results, he says, usability practitioners tend to focus on convincing people: “We are following the right methodology, so the product must be good”.

Instead of UCD, which he argues has never worked but merely leads people to follow a process blindly, he encourages what he calls informed design: a reward system for design activities, such that the effectiveness of each activity can be measured.

Spool states that he has researched successful and unsuccessful interaction design projects, and claims to have found three quality attributes the successful projects had in common: a long-term user experience vision, a working feedback loop, and a culture that encourages design failure.

“Vision : Can everyone on the team describe what the experience of using your design will be like five years from now?” (Spool, 2008, 2009)

The point here is that you have a larger goal, and when you take daily baby steps in your design, you can tell whether those steps take you toward what you are aiming for or not. This also seems to facilitate consistency, as everyone on the team is supposedly aiming for the same, shared goal.

“Feedback : In the last six weeks, have you spent more than two hours watching someone use yours or a competitor’s design?” (Spool, 2008, 2009)

The usefulness of feedback from actual users is rather obvious. Spool seems to stress the importance of everyone on the team having the experience of seeing real users use the design – supposedly in order to remember just how differently real users perceive the design than its makers do.

Spool argues for six weeks as the maximum time one should go without this exposure to users because, according to him, memories fade after that. In an online, distributed development environment, we therefore want to regularly expose developers to how users experience their designs.

“Culture : In the last six weeks, have you rewarded a team member for creating a major design failure?” (Spool, 2008, 2009)

Here, Spool argues that encouraging failure results in a culture of learning from that failure, instead of making contributors ashamed or afraid of mistakes. This also seems to facilitate creativity and the willingness to try experimental design ideas.

Regardless of the validity of Spool’s criticism of UCD, his propositions for what to include when designing for our actual users look useful, because they can seemingly be adopted without burdening the team with a heavyweight process.

Open source, for example, has loosely been considered a form of distributed participatory design (Barcellini et al., 2008; Terry et al., 2010). Participatory design is concerned with democracy at work and democracy in design. In its roots, then, its approach to design is quite similar to the ideals of open source (Nichols & Twidale, 2003), where transparency of the development process is a key value.

Participatory design

In contrast to UCD, where the focus is on researching the users so that designers can make better design decisions, participatory design seeks to empower users as decision makers in the design process itself.

Researcher-designers are thus expected to facilitate user-designer cooperation instead of representing the users (Iivari, 2009). Ehn (1993) approaches the question of communicating an understanding of the intricacies of work between designers and users in terms of Wittgensteinian language-games:

“If designers and users share the same form of life, it should be possible to overcome the gap between the different language-games [of users and designers]. It should, at least in principle, be possible to develop the practice of design to the point where there is enough family resemblance between a specific language-game of the users and the language-games in which the designers of the computer application are intervening. A mediation should be possible.” (Ehn, 1993, p. 70)

Within the domain of participatory design there are techniques that supposedly help both users and designers see the other person as a partner instead of a problem, recognizing the differences in perspective between stakeholders (Allen et al., 1993). In the context of an open source project, too, users and developers have different language-games. Participatory design explores the conception that making mediation possible between those language-games facilitates design and user participation through enhanced communication.

Erickson (1996) discusses telling stories, which can be considered an initial, less formal alternative to scenarios, as a similar “equalizer” among the different participants of a discussion, since storytelling requires no special skills. This comes very close to the challenge of involving users in development, a challenge that seems to culminate in open source development.

An open source community indeed engages users at the level of discussion. Observing users in their real working environment is difficult – and sharing those observations in a way that engages the community (such as a video narrative illustrating users’ environments) raises privacy issues and, again, is very time-consuming. Thus, the notion of a solution that encourages users to participate further in the design instead of being observed is a very tempting one.

However, Iivari (2009) states that in the OSS project she studied, users did not have actual decision-making power regarding the software but were left in a consultative role. This seems to be true of most OSS projects. Ultimate decision making is left to developers alone, to whom only a very limited understanding of the software’s users is available in the community’s forums (Iivari, 2009).

In OSS projects, user involvement is thin from the perspectives of both UCD (no usability practitioners to represent users) and participatory design (the few users who do participate have no true power over the eventual software). Warsta & Abrahamsson (2003) have studied the similarities of agile methodologies and open source development. Sy (2007) builds a case for agile as an efficient framework in which to employ user-centered design.

Further research could reveal whether these approaches might also be applicable to open source development. Central questions, then, still without clear answers, seem to include:

Is there a way UCD understanding could enrich the natural way OSS projects learn about their users?

Could usability practitioners plug into the community’s existing means of communication, finding fruitful points of contact with the community while using methods favoured by UCD, such as ethnographies, to fuel design?

Or should we perhaps concentrate on what seems to come naturally to an OSS community, and only refine user research methods to be based on the open source community’s feedback?

December 20th, 2011

At LUNS Limited they have collapsed the Moodle form fieldsets that only contain optional items. Without having seen any usability test results, or knowing whether they exist, it does seem like an elegant solution at first glance! (Discussion)

They have also used the example of the Quiz Add question dialog (tracker item) we did with Tim, for allowing people to add activity modules on the course front page. I originally found this UI pattern in QuestionMark during the user research sessions done for the Quiz UI redesign project – great to see it being put to good use.

This should make adding activities more straightforward. Yay!

(Thanks to Helen Foster for the screenshot and to Mary Cooch for the screencast)

Summary of reactions: over 60 tweets in total across all links, the grade Eximia Cum Laude Approbatur, and an honorary mention in the thesis competition (link in Finnish) of ACM SIGCHI Finland. The work continues!

Update (Nov. 18, 2011): This thesis won an honorary mention in the thesis competition (links in Finnish) of the Finnish chapter of SIGCHI! Yay! The jury’s statement on the thesis: “The jury considered this a new type of thesis work, which successfully captures the phases and challenges in a multi-phased process of redesigning a Moodle community application. Open source communities have been little investigated from the HCI point of view, and the author successfully opens interesting new viewpoints with the thesis. The constructive Pro Gradu thesis has also resulted in a tangible contribution.”

Free/Libre Open Source Software (FLOSS) development has become an important way of producing software in modern society. In principle, the source code produced as OSS is openly designed, developed and distributed, and developers take part in the process voluntarily. The resulting code is available to end-users freely or at little cost. Often the developers and users are spread all over the globe, with the OSS community using virtual forums for questions, user feedback and support.

Taking part in OSS projects often poses challenges and obstacles to the usability practitioner, whose main interest is to design the user interface so that it better fits users’ needs. This is the topic Olli Savolainen deals with in his thesis. He reports on his personal motivation and continuous interest in improving the quality and, in particular, the usability of Moodle Quiz. He also describes his efforts and perseverance in gaining acceptance in the community before the changes he suggested, after several iterations, finally got accepted into the code base of Moodle 2.0. The description of the project is given on two levels: while reporting on the actual user-centered design work done in the various phases of the project, another, more personal account of the challenges encountered on the way, and of the reactions to them, unfolds. This kind of reflection is very valuable for understanding the norms, values and ways of working in FLOSS communities, which are important for gaining acceptance and recognition as an active FLOSS participant.

The thesis is a well-balanced and reflective document of things learned and practiced in the Quiz UI project, as well as of thinking about them within the larger framework of OSS development projects as described in the literature. The background literature cited is extensive, ranging from books and journal and conference papers to blog and discussion forum entries and documentation. Furthermore, it is well utilized throughout the thesis.

The vocabulary in the thesis is versatile and the language in general grammatically correct, though professional proofreading and language checking might still improve it. A minor drawback of the thesis is a structure that creates a feeling of repetition, since some issues are first introduced in Chapter 2 but discussed in more detail in Chapters 7 and 8, with many cross-references between the sections. However, this is only a mark of thoroughness and consistency in reporting.

Olli Savolainen has been involved with Moodle and the Quiz UI for more than three years, and his skills and expertise are apparent in the thesis. The main findings are based on personal work experience, and they smooth the usability practitioners’ path into OSS communities. The thesis work is relevant to future OSS development practitioners. It unites the fields of software engineering and usability engineering, bridging the gap still observed in computer science education.

The work carried out by Olli Savolainen clearly fulfills the standards set for a thesis in Interactive Technology. We propose that the thesis be accepted with the grade eximia cum laude approbatur.

At the Department of Computer Sciences, September 9, 2010
Saila Ovaska
Eleni Berki

April 9th, 2010

Robert Martin spoke charismatically about test-driven development at last year’s RailsConf. This totally saved my day today.

Why? Because the guy promotes the idea of having tests and running them all the time to prevent your code from becoming an enormous, unholy mess. Because when you have tests, you are not afraid of making changes. (In fact, you are effectively improving the user experience of programming1.) You can play all you want, because you know exactly when anything in your code breaks as a result of you changing the code.

Guess what? It applies to usability, too. Three points:

What is a test? Essentially, it embodies what *should* happen. If, after you have changed your code, that expected thing doesn’t happen, you know you are in trouble.

Likewise, when you test usability, you expect the person looking at your UI to do something. When they don’t, you know you have two options: go back to the original, or make it better.
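For readers less familiar with the code side, here is a minimal sketch of a test that “embodies what should happen”. The function and its behavior are hypothetical, chosen only for illustration; the test uses Python’s built-in unittest module.

```python
import unittest

def add_tag(text: str, tag: str) -> str:
    # Hypothetical function under test: wrap text in an HTML-like tag.
    return f"<{tag}>{text}</{tag}>"

class AddTagTest(unittest.TestCase):
    def test_wraps_text(self):
        # This assertion embodies what *should* happen: if a later
        # change to add_tag breaks it, the failing test tells us
        # immediately that we are in trouble.
        self.assertEqual(add_tag("hello", "b"), "<b>hello</b>")
```

Running the suite after every change (for instance with `python -m unittest`) is the “not being afraid of making changes” part: the feedback arrives while the change is still fresh in your mind.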

Debugging. When you test code, you may spot that oh, there is an error: the test didn’t pass. Ideally, debugging is so built into the process that you don’t really think of it as separate: since you have tests for every little part of the program, the bug may be pretty easy to spot.

In usability testing, when you notice an issue (while trying to keep your calm so the test participant does not notice your frustration), and the participant is thinking aloud as you have asked them to, you learn the reason there and then, and the solution is often more or less obvious.

Lastly, tests do not by themselves lead to great design. Your code may still suck, but at least you have the courage to improve it, since you can check whether your new fix breaks anything else. The important thing is that the tests exist so you can rely on them.

The same goes for usability. A test does not do anything for the design in itself. If you have failed to understand the user’s goals in the first place, a usability test will only show you how the user gets confused while doing the wrong thing, or while doing it in an unrealistic setting2.

However, a test does describe what the UI was designed to do. When you have comprehensive usability tests for a UI, you can use those test tasks against any new version of the UI, and see if it still serves the purpose it was originally created for, and how well.

In a way, usability testing is just like unit testing: when you have tests defined and you regularly run them alongside development, you know your stuff is good.

If you don’t run them, you don’t know. More likely than not, what you create just does not hold together.
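The parallel above could even be made literal by treating usability test tasks as data that travels with the product. This is a purely illustrative sketch – all names and tasks are hypothetical, not an existing tool:

```python
from dataclasses import dataclass

@dataclass
class UsabilityTask:
    # A usability test task plays the role of a unit test: it records
    # what the UI was designed to let a user accomplish.
    description: str        # what the participant is asked to do
    expected_outcome: str   # what *should* happen, like an assertion

# Hypothetical task suite; the same tasks can be re-run against
# every new version of the UI to see if it still serves its purpose.
tasks = [
    UsabilityTask("Upload an image into a rich text field",
                  "The image appears in the document"),
    UsabilityTask("Recover a forgotten password",
                  "A reset email is requested without errors"),
]

def pass_rate(results):
    """Fraction of participants who completed a task in one test round."""
    return sum(results) / len(results) if results else 0.0
```

Re-running the same task suite against a redesigned UI then answers the question posed above: does the design still do what it was originally created for, and how well?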

February 13th, 2010

After the last iMoot session I had, I was chatting with Silvia Calvet about usability and its social nature.1

During the iMoot conference I also got a couple of precious chances to hear about different community members’ usability efforts within their organisations. It turns out there are a bunch of people already doing usability work related to Moodle, mostly inside their organisations. Even more people are interested, but an environment for discussing usability in the context of Moodle does not exist.

We need an environment where community members can make their usability efforts visible to the rest of the community:

A place where people are encouraged to brag about what they have done for usability in their organisation, and to share what has been learned (usability test tasks, results, …).

A site that could suggest directions to take with usability: for instance, interactive instructions for usability testing.

A corner of the community to chat in about usability, where you could share your frustrations with Moodle, with doing usability work, and with usability issues in general.

Another aspect of this effort would be to visualize actual concrete usability data about Moodle.

Have a hierarchy on the site from high-level to low-level goals, and for the red routes of the Moodle UI (of course, these need to be defined first).

Allow people to link user interfaces (cvs/git), tracker issues and usability tests (containing test tasks and results) to these goals, while keeping the user goals as the starting point for everything.

The slogan? How about… We are all about the goals of the learners!

The magic I want to make happen: make usability visible socially, since it is, at its foundation, a phenomenon of social artifacts. Engineers are creating artifacts for users who would be better off with other kinds of artifacts. If we can make this discrepancy obvious in the community’s social sphere, there would perhaps no longer be a need to try to convince software engineers of the concrete need for the work. As it is, many seem to perceive usability as too abstract and distant for them to actually do something about it.

Before any of this, though, I believe the first milestone is to do usability testing to determine the current level of usability. This is needed to set measurable goals for Moodle usability and to prioritize the things a given part of Moodle is primarily intended for. The fun thing about the above vision is that it is easy to start small: first start filling in data for one activity module while it is usability tested, and then build on the vision as we go. Even if we end up just creating a site for documenting Moodle’s current level of usability (as a side effect of doing usability testing), it is still worth it.

Silvia has, since the summer, greatly encouraged me in my work with Moodle. She herself works for CVA, a Moodle partner, and we also met at EuroIA09 in Copenhagen to discuss where to take Moodle’s usability efforts. [↩]

February 7th, 2010

Preparing to present at iMoot, an online conference about online learning and Moodle, was an intensive process for me, but it paid off – both in terms of learning while designing the presentation, and because many of the presentations were really inspiring.

August 19th, 2009

Note: The test results below require some understanding of the UIs in question. Follow the links at the beginning to get an idea of what the UIs are like.

I ran usability tests with eight test subjects over three days last week. The tests took about 15 minutes each but gave plenty of data. The main conclusions:

Uploading images through the rich text editor in Moodle 2.0 has too many steps, and they are well hidden. All 8 users really struggled in one or more of the steps (different users in different ones). If it weren’t for the test situation, in which users typically try harder (since they assume the task is possible and there is social pressure), even more of them would likely have failed the task.

The old ‘forgotten password’ form failed or caused struggles in 4 out of 5 tests. The new form caused no confusion in any of its three tests. (I intended to have an equal number of tests for both; the imbalance was a mistake on my part.)

This is discussed in the tracker item MDL-16597, along with proposed solutions (Updated September 2nd).

File Upload in the Rich Text Editor

In the upcoming Moodle 2.0, it is possible to upload images wherever there is a rich text field. Except for the [Choose…] button in the Insert/Edit Image dialog, this diagram/mockup is a very close image of the functionality in the Moodle 2.0 HEAD I tested, although there were only three sections in it.

Getting from the text editor to the dialog where you can select which image to upload takes five clicks. Users got lost in every step except the last one (pressing the “Browse…” button), most of them in several steps.

1. Get to the add image dialog (4 users found the toolbar button; 1 found the item in the right-click menu for opening the dialog)

Failure: 2 subjects (needed my help to proceed)

Struggles: 3 subjects (searched for how to do it for several minutes, trying the clipboard, drag & drop, etc., but continued on their own)

Passed this step without struggles: 3 subjects

2. Click the ‘browse’ button (URL is a technical abbreviation and got many users lost; some complained that they did not know what it means; one user complained that URLs have nothing to do with uploading an image from the local computer and was confused by that. Some users wrote the image name in the description field, the title field, or both; they did not understand the difference between the fields)

Failure: 2 (1 needed my help to proceed; 1 gave up at this point)

Struggle: 2

Passed this step: 4

3. Click “Upload a file” (after this, clicking “Browse…” and finding the desktop, where the file was located, was quick)

Failures: 0

Struggles: 4 (in general, users assumed ‘Local files’ to mean the local computer and first thought they should look there. Thus they were no longer looking for an upload option – apparently they assumed they were already uploading.)

Passed this step: 3

4. Press “Browse…”

Passed this step: 7
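For a quick overview, the per-step outcomes above can be tallied into a simple funnel. The counts are copied directly from the results; steps 3 and 4 have seven subjects because one person gave up at step 2:

```python
# Per-step outcomes from the image upload test (8 subjects; one gave
# up at step 2, so 7 subjects remain for steps 3 and 4).
steps = {
    "1. Get to the add image dialog": {"fail": 2, "struggle": 3, "pass": 3},
    "2. Click the 'browse' button":   {"fail": 2, "struggle": 2, "pass": 4},
    "3. Click 'Upload a file'":       {"fail": 0, "struggle": 4, "pass": 3},
    "4. Press 'Browse...'":           {"fail": 0, "struggle": 0, "pass": 7},
}

for name, counts in steps.items():
    total = sum(counts.values())
    print(f"{name}: {counts['pass']}/{total} passed without struggling")
```

Seen this way, no single step is catastrophic on its own, but chaining four steps that each trip up half the users is what makes the whole flow fail.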

Notes:

One or two of the users did not recognize the rectangle under the file upload field as a button, so they got stuck there for a moment.

One user did not understand that they still had to click the Insert button in the first dialog to actually get the image into the document.

Users ignored the “Current files” section; some of them wondered what it was.

Forgotten password form

The old form was confirmed misleading, since it made 80% of the subjects fill in both fields. Some of the subjects still did not understand what to do after the form gave the error message telling them to fill in one field or the other – apparently the visual (erroneous) cue given by the form was stronger for them. What the original forum thread reported was clearly right. I am really surprised such an issue has not been spotted before.

Only when it was too late did I notice that the WordPress people have designed it even better than I did. How bitter: I did not believe it when someone told me having a single field might be a better solution. Now that I see it, I think that might have worked better, especially due to the formslib bug we have that makes it impossible to have two forms on the same page, and which thus probably makes the current solution inaccessible. Nonetheless, since we now have promising usability test results for my design and none for that of WordPress, I still recommend keeping my design (patch) for now.

Other notes

TinyMCE and other parts of the file uploading flow were in English, although the browser and the rest of Moodle were in Finnish, since all the test subjects were Finnish. This may have slowed users down, but all the test subjects demonstrated during the tests that they understood the English they encountered.

All test subjects were my friends, and in theory this might have introduced a bias. However, as they were unfamiliar with the UI in question and I did not help them during the tests, the issues they encountered seemed genuine. This can be disputed, though, and I welcome discussion about this style of low-fidelity testing.

The videos taken (screen image and recorded voice in Finnish) are available on request.

I hope to explain how I did this round of testing and which parts I made easier for myself. There will hopefully be a blog post or a video, to show the community how little work this can be.

The test preparatory talks and setting were roughly the same as in last summer’s Quiz UI tests, though less formal.