Task Matters When Scanning Data Visualizations
A major challenge in evaluating the effectiveness of data visualizations and visual analytics tools is that different users may use the same tools for different tasks. In this paper, we present a simple example of how different tasks lead to different patterns of attention to the same underlying data visualizations. We argue that the general approach used in this experiment could be applied systematically to the task and feature taxonomies that visualization researchers have developed. Using eye tracking to study how common tasks shape attention to common types of visualizations will support a deeper understanding of visualization cognition and the development of more robust methods for evaluating the effectiveness of visualizations.