In visual communication, people glean insights about patterns in data by observing visual representations of datasets. Colormap data visualizations (“colormaps”) show patterns in datasets by mapping variations in color to variations in magnitude. When people interpret colormaps, they have expectations about how colors map to magnitude, and they are better at interpreting visualizations that align with those expectations. For example, they infer that darker colors map to larger quantities (dark-is-more bias) and that colors higher on vertically oriented legends map to larger quantities (high-is-more bias). In previous studies, the notion of quantity was straightforward because more of the concept represented (conceptual magnitude) corresponded to larger numeric values (numeric magnitude). However, conceptual and numeric magnitude can conflict, as when rank order is used to quantify health: smaller numbers correspond to greater health. When the two conflict, are inferred mappings formed at the numeric level, the conceptual level, or a combination of both? We addressed this question in five experiments spanning three data domains: alien animals, antibiotic discovery, and public health. Across experiments, the high-is-more bias operated at the conceptual level: colormaps were easier to interpret when larger conceptual magnitude was represented higher on the legend, regardless of numeric magnitude. The dark-is-more bias tended to operate at the conceptual level, but numeric magnitude could interfere with it, or even dominate, when conceptual magnitude was less salient. These results elucidate the factors that influence the meanings inferred from visual features and underscore the need to consider data meaning, not just numbers, when designing visualizations intended to facilitate visual communication.
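As an illustration of the design implication, the following is a minimal sketch (not taken from the paper) of how rank-ordered data might be encoded so that both the dark-is-more and high-is-more biases align with conceptual magnitude. The toy “health rank” dataset, the choice of the Blues colormap, and all variable names are assumptions for illustration only; matplotlib and numpy are used simply as convenient tools.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy rank-ordered "health" data (hypothetical): rank 1 = most healthy
# (conceptual maximum), rank 50 = least healthy, so numeric and conceptual
# magnitude conflict.
rng = np.random.default_rng(0)
health_rank = rng.integers(1, 51, size=(10, 10))

# Re-express the ranks as conceptual magnitude (greater health = larger value)
# before mapping to color, so that darker colors mean "more healthy".
conceptual = health_rank.max() - health_rank

fig, ax = plt.subplots()
im = ax.imshow(conceptual, cmap="Blues")  # in "Blues", larger values are darker

# Vertically oriented legend: greater conceptual magnitude (greater health)
# appears both darker and higher, regardless of the underlying rank numbers.
cbar = fig.colorbar(im, ax=ax)
cbar.set_ticks([conceptual.min(), conceptual.max()])
cbar.set_ticklabels([f"rank {health_rank.max()} (least healthy)",
                     f"rank {health_rank.min()} (most healthy)"])
ax.set_title("Hypothetical health ranks encoded by conceptual magnitude")
plt.show()
```

Here the ranks are converted to conceptual magnitude before color mapping, so the legend labels carry the rank numbers while darkness and vertical position track data meaning rather than raw numeric value.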
- Award ID(s): 1945303
- PAR ID: 10421657
- Date Published:
- Journal Name: IEEE Transactions on Visualization and Computer Graphics
- Volume: 29
- Issue: 1
- ISSN: 1077-2626
- Page Range / eLocation ID: 385-395
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation