Objective

In his own words, MacEachren is aiming to "provide a base for developing ... a conceptual framework to understand and facilitate visually-enabled reasoning/decision-making under uncertainty and to use that framework to develop visual analytics methods to achieve this objective".

Summary

MacEachren's primary contention in this paper is that too little research has examined how visual interfaces actually facilitate reasoning under uncertainty, rather than merely visualizing the uncertain components of data for consumption by sense-makers. He suggests that one step toward more meaningful, comparable studies is to create a framework of the nature, problems, and methods for assessing tool effectiveness while reasoning under uncertainty.

MacEachren draws the foundation of his framework from three other papers. Kahneman and Tversky (1982) segmented uncertainty into distributional, singular, reasoned, and introspective components, which connect back to external (distributional, singular) and internal (reasoned, introspective) sources of uncertainty. Courtney (2003) defined four levels of uncertainty related to the possible outcomes of a decision: a clear enough future, a choice among alternative futures, a range of futures, and true ambiguity. Zack (2007) categorized the broader space of uncertainty into four sub-categories based on the amount of information or knowledge available: uncertainty, complexity, ambiguity, and equivocality. These classifications allow researchers, designers, and developers to understand where uncertainty enters the reasoning process and how to adapt their tools to assist the user.

At the end of the paper, MacEachren outlines the following challenges facing future research in this area:

understand the components of uncertainty and their relationships to use domains, information needs, and expertise;

assess usability and utility of the methods/tools — design studies for reproducibility and comparability.

Thoughts and Reactions

I think this paper is a step along the right path. It provides the components for the framework MacEachren is trying to build, but more work is needed to formalize this framework into an actionable set of guidelines that can inform tool evaluation in fields like human-factors and cognition.

I agree with MacEachren's assertion that the reasoning tasks in most studies are not representative of real-world scenarios. Unfortunately, this is a difficult problem to solve. In particular, I see two large challenges in creating and evaluating these tasks.

From the creation standpoint, it is possible (perhaps even easy) to design a sufficiently complex reasoning task. However, defining a task in a way that makes it comparable with others is very difficult. The most efficient solution to this problem, from my perspective, is to curate a standard list of reasoning tasks that can be used in evaluations across tools.

From the evaluation standpoint, academic researchers, especially those in a university setting, are often limited in the subject pools they can draw from. These sorts of decision-making tasks often require a certain level of experience/expertise to be relevant to practitioners in a domain. Subject pools at universities tend to be novice undergraduate students, making it difficult to infer meaningful relationships between tools and reasoning processes. A possible partial solution would be a move to more targeted participant selection techniques, using services like Subjects Wanted. Additionally, there is some question, at least in my mind, of which methods (e.g. interaction logging, eye tracking) and metrics (e.g. speed, accuracy, precision) should be used to compare performance across tools and domains.

One thing I believe is missing from MacEachren's initial list of challenges is an explicit acknowledgement of collaboration. Reasoning is increasingly a team activity, with mixed teams of generalists and specialists working to understand the outcomes associated with decisions (and their associated courses of action). I know that in my own work we have had to think about and account for changes in metric calculations in team-based analysis. I think a specific callout to this may help keep researchers focused on understanding uncertainty in dynamic team environments.

In Aug 2012 I started down a path that was really fun in some regards, and really trying in others. In Dec 2014, I finished my Master of Science degree in Geography at Penn State. For my thesis, I chose to write two somewhat-related papers looking at interpretive uncertainty and a taxonomy evaluation of cartographic point symbols.

I was fortunate enough to attend the Annual NACIS Meeting in Greenville, SC. I simply cannot recommend this conference enough to cartographers, designers, and developers working with maps in any way.

Below are the slides to the presentation I gave on my thesis work. The title, "Questions Facing Map Design in the Age of Mobility and Siri", may be misleading as my interests have changed since I originally submitted my talk. Now, I am focusing on how uncertainty about what a point symbol represents on a map affects the decision making process, and this talk reflects that change.