This paper presents an interesting study in which pairs of graduate students designed construction-paper visualizations for bubble sort. The groups developed very different visualizations, and those visualizations did not map well onto the AV design system studied in the project (LENS, a version of XTango). One hypothesized reason that earlier studies show little pedagogical value for algorithm visualizations is that AV systems do not give designers appropriate visualization tools.
Introduction: Although others have reported somewhat discouraging results regarding the effectiveness of AV technology, the authors suggest there may be a mismatch between the way people comprehend algorithms and how that comprehension gets mapped to visualization. To probe this mismatch, the authors have students create “art project”-style AVs using construction paper, colored pencils, etc., and then have some of them create visualizations in LENS, an adaptation of XTango.
Empirical Studies: The visualizations in this experiment cover bubble sort and one other sort (not discussed), chosen because the graduate student subjects were intimately familiar with them. Pairs of students were videotaped creating paper visualizations to explain bubble sort to novice users. Constructive interaction and conversational analysis were the primary evaluation tools. Student pairs created a broad array of visualizations: one with numbers, one with colors, and one which simulated a game of football. One interesting feature of all three was the retention of previous states, a sort of external memory for the learner. The groups’ procedures were similar: first, step through the algorithm and agree on its workings; next, decide how to present the information to a novice, understanding what they know and don’t know; finally, create and refine the visualization.
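The retained-states feature the pairs converged on can be sketched in code. This is a hypothetical illustration, not anything from the paper: a bubble sort that records the array after each pass, so a learner can look back at earlier states rather than watching them be overwritten.

```python
def bubble_sort(a):
    """Sort a list in place, keeping a snapshot after each pass.

    The returned history plays the role of the 'external memory'
    the paper observes in the paper visualizations: previous
    states stay visible instead of being destroyed.
    """
    history = [list(a)]  # initial state
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        history.append(list(a))  # snapshot after this pass
    return history

# Each row of the history is one retained state of the array.
states = bubble_sort([3, 1, 2])
```

A visualization would render each row of `states` as one line of the display, top to bottom, rather than animating a single mutable row.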
Accordance with AV Software: Two of the pairs went on to implement their visualizations in LENS, an interesting-events-style AV tool. Students were shown how to create a visualization in LENS and then asked to create a bubble sort visualization. The learning curve for LENS was fairly shallow.
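In the interesting-events approach, the algorithm is annotated to emit events at noteworthy points, and the visualizer renders each event. A minimal sketch of the idea, with event names that are purely illustrative (they are not LENS's actual API):

```python
def bubble_sort_events(a, emit):
    """Bubble sort annotated with 'interesting events'.

    `emit` is a callback the visualizer supplies; the event names
    ('compare', 'swap') are hypothetical, chosen only to show how
    an algorithm is instrumented in this style of tool.
    """
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            emit("compare", j, j + 1)       # about to compare neighbors
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                emit("swap", j, j + 1)      # neighbors were exchanged

# Collect the event stream instead of drawing it.
events = []
bubble_sort_events([2, 3, 1], lambda *e: events.append(e))
```

The visualizer, not the algorithm, decides how each event looks on screen; this separation is what forces the designer to express a visualization in terms the tool's event vocabulary can support.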
Mapping Visualizations: Researchers mapped various components of the paper visualizations to pseudocode and to LENS. This helped them see what was abstracted away and what was missing from each type of visualization. Important areas of comparison: abstract functionality, grain of analysis, perceptual salience, and cultural expectations.
Conclusions: Human visualizations and LENS share similar semantics at a high level of abstract functionality and grain of analysis; perceptual salience is more easily accomplished in the human visualizations; LENS could not meet all cultural expectations, so weaker metaphors had to be used.