Creators of data visualizations currently receive little or no information about how well their audiences can read the visualizations they deploy. Yet practitioners who seek research-backed design guidance must rely on rigid, overly generalized recommendations from studies in data visualization. Recommendations misread from visualization evaluation studies may not only artificially narrow the design space of chart types available to creators but also disadvantage sub-populations of readers whose abilities do not align with published visualization best practices.

This problem has two aspects. One is scale: people encounter more data visualizations in their news, social media, television, and work than ever before. The other is diversity: readers bring a range of backgrounds and experience, which may shape how effectively they extract information from the visualizations they encounter. This project explores a model in which experiments shift from evaluating different visualization types (e.g., bars versus pies) to evaluating different people. The controlled experiments will vary participant expertise, test hypothesized correlates of visualization performance (e.g., numeracy and spatial ability), and use transparent statistical methodologies to establish dimensions of individual differences in visualization performance.
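One way such hypothesized correlates could be examined is with a regression of task accuracy on individual-difference scores. The sketch below simulates participant data and fits an ordinary least squares model; the variable names, effect sizes, and sample size are illustrative assumptions for exposition, not methods or results from this project.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of participants

# Simulated, standardized individual-difference scores.
numeracy = rng.normal(0.0, 1.0, n)
spatial = rng.normal(0.0, 1.0, n)

# Assumed generative model: both traits modestly raise
# chart-reading accuracy (proportion-correct scale).
accuracy = 0.70 + 0.05 * numeracy + 0.08 * spatial + rng.normal(0.0, 0.05, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), numeracy, spatial])
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)

print(beta)  # estimated intercept, numeracy slope, spatial slope
```

With enough participants, the estimated slopes recover the assumed effects, illustrating how an experiment centered on people rather than chart types can quantify which abilities predict visualization performance.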