Interestingly, chapter 13 of the UX book was about rapid evaluation methods. As I pointed out in my last blog post, there can be a difference of opinion even between experts. (Maybe I am not entitled to be called an expert yet.) I will give another example. I am currently taking the class "Human Factors in Engineering" with Prof. Robert Proctor. Last week's homework was evaluating Europa, the European Union's website, against Jakob Nielsen's heuristics. Among the heuristics was "Aesthetic and minimalist design". I thought the website was minimalist in its approach. Though there was a lot of content (the EU surely has lots of things to say), it was well organized, and there were no flashing banners or eye candy to distract. While submitting, I asked the TA for her opinion. She said there is too much content, so the site violates the principle. (She is the president of the Purdue University student chapter of the Human Factors and Ergonomics Society (HFES). And that chapter received the HFES Gold or Silver award for Outstanding Student Chapter every year from 2008 to 2011, four times in a row! Maybe I should just defer to her authority.) As I said before, I don't think I am right and she's wrong. It just shows the limits of the tools we use. We applied the same heuristic standard and reached different conclusions. My view is supported by academic research: Hertzum and Jacobsen (2003) essentially state that the variation among results from different evaluators using the same method on the same target application can be so large as to call into question the effectiveness of the methods. One good side effect of this limitation is that our job will never be automated by computer algorithms. And for those in America, their interaction design jobs will not be outsourced to cheap labor in the third world, because there are cultural differences.

And here is another story, about another of Nielsen's heuristics, one I couldn't agree with more. When I was at the university health care center for an immunization test, there was a computer error on the nurse's computer. And guess what the error message looked like. (I am sorry I didn't take a picture.) There was an error code starting with 0x00, a few sentences saying that an error had happened, and a big red stop sign. (Everybody already knows an error has happened.) But the message gave the user no helpful information about a possible solution or the nature of the cause. (I carefully checked the error screen myself for help.)

It was a very obvious violation of the heuristic which states that "Error messages should be expressed in plain language (no codes), indicate the problem precisely, and suggest a solution constructively." And the poor nurse, who I think must be a very intelligent lady, still could not solve the problem by the time I left after talking to a practitioner. She must have felt frustrated, which is not good. Hold on, ma'am: we, interaction designers, are coming to the rescue!

The book also discusses the pros and cons of rapid evaluation methods. There is one more advantage of rapid evaluation with a few UX practitioners or experts over a full usability evaluation with many real users. For an interaction design to be successful, it is imperative that UX feedback be delivered to the design or development team as early as possible. That leads to the use of low-fidelity wireframe models of an interface during evaluation. However, real users tend to be more annoyed or affected by low visual quality than trained experts are. So their focus and criticism fall more on minute aesthetics, which can easily be fixed at a later stage. Though the facilitator can explain to ordinary users that this is just for evaluation, their ability to see past the humble visuals is limited compared to experts.

I also agree with the view that what matters is the follow-up. As mentioned on page 496 of the text, "Now that you have found the usability problems, what is next?" As John and Marks (1997) found, applying the lessons from usability evaluation is a whole new issue requiring an organizational view. Sometimes nobody reads the usability reports (the worst-case scenario). In many cases it is too late anyway (the development team is already behind its launch schedule). And that is where a dictator like Steve Jobs could excel compared to a more democratic or hierarchical organization. If the CEO of the company were a participant in the rapid evaluation, who could ignore the reports?
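As a small aside for the developers among us, Nielsen's error-message heuristic quoted above is easy to sketch in code. This is just my own illustrative example, not anything from the book or from the health center's actual system; the names (`format_error`, the records-server scenario, the extension number) are all hypothetical.

```python
# A hypothetical sketch of Nielsen's error-message heuristic:
# plain language (no codes), a precise problem statement,
# and a constructive suggestion.

# Roughly what the nurse saw: a code, no cause, no way forward.
bad_message = "Error 0x0000007B: Operation failed."

def format_error(problem: str, suggestion: str) -> str:
    """Compose an error message per the heuristic: state the problem
    precisely in plain language, then suggest a constructive next step."""
    return f"{problem} {suggestion}"

good_message = format_error(
    problem=(
        "The patient record could not be saved because the "
        "connection to the records server was lost."
    ),
    suggestion=(
        "Please check the network connection, then click Retry. "
        "If the problem persists, call IT support at extension 1234."
    ),
)

print(good_message)
```

The difference is that the second message tells the nurse what went wrong and what to do next, so she does not have to flag down a passing student to decode `0x00` for her.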