People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished between factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows), whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; the dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and the concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations representing fictitious data about environmental concepts (sunshine, shade, wildfire, ocean water, glacial ice). We found that both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.
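To make the framing of assignment inference concrete, here is a minimal sketch (in Python, assuming NumPy and SciPy are available) of how a globally optimal color-to-concept assignment could be computed by maximizing combined merit. The quantity levels, colors, merit values, and the simple weighted-sum combination rule are all hypothetical placeholders for illustration, not the fitted model or data from the paper.

```python
# Minimal sketch of assignment inference with two (conflicting) merit sources.
# All numbers and the weighted-sum combination are hypothetical placeholders;
# this is not the paper's fitted model or empirical data.
import numpy as np
from scipy.optimize import linear_sum_assignment

levels = ["more sunshine", "less sunshine"]   # quantity levels in the legend
colors = ["light yellow", "dark gray"]        # colormap endpoints

# Direct associations: sunshine is directly associated with light, yellowish
# colors, so "more sunshine" gets high merit for the light yellow endpoint.
direct_merit = np.array([
    [0.8, 0.2],   # more sunshine
    [0.2, 0.8],   # less sunshine
])

# Relational association (dark-is-more bias): darker colors get high merit
# for representing larger quantities, which conflicts with the direct term.
relational_merit = np.array([
    [0.1, 0.9],   # more sunshine
    [0.9, 0.1],   # less sunshine
])

def inferred_mapping(w_direct, w_relational):
    """Combine merit sources and return the assignment maximizing total merit."""
    merit = w_direct * direct_merit + w_relational * relational_merit
    rows, cols = linear_sum_assignment(merit, maximize=True)
    return [(levels[r], colors[c]) for r, c in zip(rows, cols)]

# If direct associations dominate, lightness means "more sunshine";
# if the dark-is-more bias dominates, the inferred mapping flips.
print(inferred_mapping(0.7, 0.3))   # [('more sunshine', 'light yellow'), ...]
print(inferred_mapping(0.3, 0.7))   # [('more sunshine', 'dark gray'), ...]
```

Depending on the relative weights, the same direct associations can yield opposite inferred mappings, which is the kind of conflict between merit sources that a combination method must resolve.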
This content will become publicly available on September 17, 2026
Perceptual and Cognitive Foundations of Information Visualization
Information visualization is central to how humans communicate. Designers produce visualizations to represent information about the world, and observers construct interpretations based on the visual input as well as their heuristics, biases, prior knowledge, and beliefs. Several layers of processing go into the design and interpretation of visualizations. This review focuses on processes that observers use for interpretation: perceiving visual features and their interrelations, mapping those visual features onto the concepts they represent, and comprehending information about the world based on observations from visualizations. Observers are more effective at interpreting visualizations when the design is well-aligned with the way their perceptual and cognitive systems naturally construct interpretations. By understanding how these systems work, it is possible to design visualizations that play to their strengths and thereby facilitate visual communication.
- Award ID(s): 1945303
- PAR ID: 10648638
- Publisher / Repository: Annual Reviews
- Date Published:
- Journal Name: Annual Review of Vision Science
- Volume: 11
- Issue: 1
- ISSN: 2374-4642
- Page Range / eLocation ID: 303 to 330
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Data are invaluable for making sense of the world, but critical thinking and observation skills are necessary to support sensemaking and improve data literacy. To develop these skills and build student capacity to connect data to content knowledge and interpret multiple meanings, a new education model was created for middle school science classrooms in partnership with teachers from across the U.S. The Building Insights through Observation (BIO) model uses hands-on, arts-based instructional approaches with science content (specifically geospatial data visualizations) to lead students in sensemaking about global phenomena. BIO builds on techniques established in the visual arts, focusing on how learners engage with, make meaning from, and think critically about visual information. In BIO, sensemaking is supported through students collectively exploring phenomena through a combination of art and datasets, reimagining data visualizations themselves, and identifying wonderings that drive future questions. The BIO goals, approach, and key features of the model are described along with its application to plate tectonics as an example. The framework is highly adaptable for most environmental and Earth science topics and supports differentiation for all learners, allowing for slowing down and discussing ideas and interpretations, which supports student-driven sensemaking of data and Earth science content.
-
Ruis, Andrew R.; Lee, Seung B. (Ed.) While quantitative ethnographers have used epistemic network analysis (ENA) to model trajectories that show change in network structure over time, visualizing trajectory models in a way that facilitates accurate interpretation has been a significant challenge. As a result, ENA has predominantly been used to construct aggregate models, which can obscure key differences in how network structures change over time. This study reports on the development and testing of a new approach to visualizing ENA trajectories. It documents the challenges associated with visualizing ENA trajectory models, the features constructed to address those challenges, and the design decisions that aid in the interpretation of trajectory models. To test this approach, we compare ENA trajectory models with aggregate models using a dataset with previously published results and known temporal features. This comparison focuses on interpretability and consistency with prior qualitative analysis, and we show that ENA trajectories are able to represent information unavailable in aggregate models and facilitate interpretations consistent with qualitative findings. This suggests that this approach to ENA trajectories is an effective tool for representing change in network structure over time.
-
Attempting to make sense of a phenomenon or crisis, social media users often share data visualizations and interpretations that can be erroneous or misleading. Prior work has studied how data visualizations can mislead, but do misleading visualizations reach a broad social media audience? And if so, do users amplify or challenge misleading interpretations? To answer these questions, we conducted a mixed-methods analysis of the public's engagement with data visualization posts about COVID-19 on Twitter. Compared to posts with accurate visual insights, our results show that posts with misleading visualizations garner more replies in which the audiences point out nuanced fallacies and caveats in data interpretations. Based on the results of our thematic analysis of engagement, we identify and discuss important opportunities and limitations to effectively leveraging crowdsourced assessments to address data-driven misinformation.
-
Humans routinely extract important information from images and videos, relying on their gaze. In contrast, computational systems still have difficulty annotating important visual information in a human-like manner, in part because human gaze is often not included in the modeling process. Human input is also particularly relevant for processing and interpreting affective visual information. To address this challenge, we captured human gaze, spoken language, and facial expressions simultaneously in an experiment with visual stimuli characterized by subjective and affective content. Observers described the content of complex emotional images and videos depicting positive and negative scenarios and also their feelings about the imagery being viewed. We explore patterns of these modalities, for example by comparing the affective nature of participant-elicited linguistic tokens with image valence. Additionally, we expand a framework for generating automatic alignments between the gaze and spoken language modalities for visual annotation of images. Multimodal alignment is challenging due to the varying temporal offsets between the modalities. We explore alignment robustness when images have affective content and whether image valence influences alignment results. We also study whether word frequency-based filtering impacts results; both the unfiltered and filtered scenarios perform better than baseline comparisons, and filtering results in a substantial decrease in alignment error rate. We provide visualizations of the resulting annotations from multimodal alignment. This work has implications for areas such as image understanding, media accessibility, and multimodal data fusion.
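As a rough illustration of what aligning gaze and spoken language involves, the sketch below matches time-stamped gaze fixations to spoken word onsets under an assumed fixed temporal offset and tolerance window. It is a generic, hypothetical example (made-up data, made-up parameters), not the authors' framework.

```python
# Generic illustration of temporal alignment between gaze fixations and
# spoken words. Not the framework described above; data and the offset and
# tolerance parameters are made-up assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Fixation:
    region: str    # labeled image region being fixated
    start: float   # seconds
    end: float     # seconds

@dataclass
class Word:
    token: str
    onset: float   # seconds

def align(words, fixations, offset=0.5, tol=1.0):
    """Match each spoken word to the fixation closest in time, assuming the
    word lags its driving fixation by roughly `offset` seconds and accepting
    matches within `tol` seconds. Both parameters are illustrative."""
    pairs = []
    for w in words:
        target = w.onset - offset  # estimated time of the driving fixation
        best = min(fixations, key=lambda f: abs((f.start + f.end) / 2 - target))
        if abs((best.start + best.end) / 2 - target) <= tol:
            pairs.append((w.token, best.region))
    return pairs

fixations = [Fixation("dog", 0.2, 0.9), Fixation("child", 1.1, 2.0)]
words = [Word("puppy", 1.0), Word("kid", 2.3)]
print(align(words, fixations))   # [('puppy', 'dog'), ('kid', 'child')]
```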