This is a blog devoted to researching the cognitive effects of Virtual and Augmented Reality.
Our Research Question is - "How can synthetic embodied VR/AR environments enhance aspects of human cognition?"
The blog shows outcomes of our research projects, such as papers, videos, paper reviews and other useful artifacts.

We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.

# Comments

A useful paper, due to its use of heuristic evaluation as an assessment tool for visualisations; it was published in one of the top journals in the field.

It makes interesting comments on the need for tools to be designed from the scientist's point of view rather than from a graphics point of view.

In essence, the paper uses parallel coordinates to represent rendering parameters for analysis and modification in volumetric medical visualisation.

Their technique aims to reduce the overhead of exploring a transfer-function parameter space, hence the use of parallel coordinates (a nice high-dimensionality visualisation technique). They use this application structure to drive their selection of heuristics to evaluate (page 72) - a sensible approach.
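For context, the core idea of a parallel-coordinates view is easily sketched: each rendering parameter gets its own vertical axis, and one parameter combination becomes one polyline across those axes. A minimal sketch follows; the parameter names and ranges are invented for illustration and are not taken from the paper.

```python
# Each rendering parameter gets its own vertical axis; a parameter
# combination is drawn as one polyline across the axes.
# Parameter names and ranges below are hypothetical.
PARAMS = {                 # axis name -> (min, max) of its value range
    "opacity":   (0.0, 1.0),
    "iso_value": (0.0, 255.0),
    "shininess": (0.0, 100.0),
}

def to_polyline(setting):
    """Normalise one parameter setting to per-axis heights in [0, 1]."""
    return [(name, (setting[name] - lo) / (hi - lo))
            for name, (lo, hi) in PARAMS.items()]

# One transfer-function setting becomes one polyline across the axes.
line = to_polyline({"opacity": 0.5, "iso_value": 51.0, "shininess": 25.0})
print(line)  # [('opacity', 0.5), ('iso_value', 0.2), ('shininess', 0.25)]
```

Normalising each axis independently is what lets dimensions with very different ranges (opacity vs. iso-value) share one display, which is exactly why the technique suits a heterogeneous parameter space.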

Does this mean it is a "usefulness" evaluation, given the mapping to key tasks? It strikes me that usability and usefulness overlap, perhaps too much, and need to be carefully teased apart in any validation.

They apply Shneiderman's visualisation mantra ("overview first, zoom and filter, then details on demand"). They test with five people but give no information about who the experts were; this is obfuscating. I assume they are visualisation experts, since the authors also tested with one "end-user" - co-opted postgrads? The experts had had no involvement with any previous parallel-coordinates project, which is important to note.

They set up the data sets ahead of time with default parameter values. The tasks were to explore freely, then to look for an identifiable object (a key). The experts were not end users with their "own goals" - note this! They can use the tool, but are not trained to think in domain terms.

The researchers used contextual-inquiry techniques to structure discussions. Eleven heuristics were evaluated on 7-point scales, and the experts provided a written report on advantages/disadvantages (page 76). The work is based on an HCI assessment approach derived from Nielsen's heuristics (Chin 1988). There is no mapping from the heuristics to numerical measures, and specifically no example questionnaire questions.

They use the five experts to rate, across the 11 heuristics, tables vs. a normal visualisation vs. parallel coordinates for parameter exploration. Wilcoxon signed-rank tests were used to detect significant differences between the three visualisation types. I have to question this: n = 5 is simply too small, a larger sample is required, and they do not report effect sizes, which adds to my doubts about statistical power. That said, the Wilcoxon test is non-parametric and does not assume normality, so the choice of test itself is defensible.
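The power concern can be made concrete: with only five paired ratings, an exact two-sided Wilcoxon signed-rank test can never reach p < .05, even when all five experts agree in the same direction. A minimal sketch, enumerating the exact null distribution by hand (the ratings are invented purely for illustration):

```python
from itertools import product

# Hypothetical 7-point ratings from five experts; values are invented
# purely to illustrate the n = 5 power problem.
parallel_coords = [6, 7, 5, 6, 7]
tables = [5, 5, 2, 2, 2]

diffs = [a - b for a, b in zip(parallel_coords, tables)]  # 1, 2, 3, 4, 5

# Rank the absolute differences (no ties here, so ranks are 1..n).
order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
ranks = [0] * len(diffs)
for rank, i in enumerate(order, start=1):
    ranks[i] = rank

# Observed statistic: sum of the ranks of the positive differences.
w_obs = sum(r for d, r in zip(diffs, ranks) if d > 0)

# Exact null distribution: under H0 each difference is equally likely
# to be positive or negative, so enumerate all 2^n sign patterns.
mean_w = sum(ranks) / 2
null_ws = [sum(r for s, r in zip(signs, ranks) if s)
           for signs in product([True, False], repeat=len(diffs))]
p = sum(abs(w - mean_w) >= abs(w_obs - mean_w) for w in null_ws) / len(null_ws)

print(p)  # 0.0625 -- the smallest two-sided p-value possible at n = 5
```

Even this most extreme pattern (every expert preferring parallel coordinates) yields p = 2/32 = 0.0625, so any "significant" result reported at n = 5 deserves scrutiny.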

They then quote comments made by the experts during the evaluation, but with no evidence of a coding scheme - just collected comments. Finally, they list a series of improvements the experts suggested.