

Title: Misleading Beyond Visual Tricks: How People Actually Lie with Charts
Data visualizations can empower an audience to make informed decisions. At the same time, deceptive representations of data can lead to inaccurate interpretations while still providing an illusion of data-driven insights. Existing research on misleading visualizations primarily focuses on examples of charts and techniques previously reported to be deceptive. These approaches do not necessarily describe how charts mislead the general population in practice. We instead present an analysis of data visualizations found in a real-world discourse of a significant global event---Twitter posts with visualizations related to the COVID-19 pandemic. Our work shows that, contrary to conventional wisdom, violations of visualization design guidelines are not the dominant way people mislead with charts. Specifically, they do not disproportionately lead to reasoning errors in posters' arguments. Through a series of examples, we present common reasoning errors and discuss how even faithfully plotted data visualizations can be used to support misinformation online.
Award ID(s):
2041136
NSF-PAR ID:
10481331
Publisher / Repository:
ACM
Date Published:
Journal Name:
Proceedings of the CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 to 21
Format(s):
Medium: X
Location:
Hamburg Germany
Sponsoring Org:
National Science Foundation
More Like this
  1. Deceptive design patterns (sometimes called “dark patterns”) are user interface design elements that may trick, deceive, or mislead users into behaviors that often benefit the party implementing the design over the end user. Prior work has taxonomized, investigated, and measured the prevalence of such patterns primarily in visual user interfaces (e.g., on websites). However, as the ubiquity of voice assistants and other voice-assisted technologies increases, we must anticipate how deceptive designs will be (and indeed, are already) deployed in voice interactions. This paper makes two contributions towards characterizing and surfacing deceptive design patterns in voice interfaces. First, we make a conceptual contribution, identifying key characteristics of voice interfaces that may enable deceptive design patterns, and surfacing existing and theoretical examples of such patterns. Second, we present the findings from a scenario-based user survey with 93 participants, in which we investigate participants’ perceptions of voice interfaces that we consider to be both deceptive and non-deceptive. 
  2. The visualization community has seen a rise in the adoption of user studies. Empirical user studies systematically test the assumptions that we make about how visualizations can help or hinder viewers’ performance of tasks. Although the increase in user studies is encouraging, it is vital that research on human reasoning with visualizations be grounded in an understanding of how the mind functions. Previously, no adequate model illustrated the process of decision-making with visualizations. However, Padilla et al. [41] recently proposed an integrative model for decision-making with visualizations, which expands on modern theories of visualization cognition and decision-making. In this paper, we provide insights into how cognitive models can accelerate innovation, improve validity, and facilitate replication efforts, which have yet to be thoroughly discussed in the visualization community. To do this, we offer a compact overview of the cognitive science of decision-making with visualizations for the visualization community, using the Padilla et al. [41] cognitive model as a guiding framework. By detailing examples of visualization research that illustrate each component of the model, this paper offers novel insights into how visualization researchers can utilize a cognitive framework to guide their user studies. We provide practical examples of each component of the model from empirical studies of visualizations, along with visualization implications of each cognitive process, which have not been directly addressed in prior work. Finally, this work offers a case study in utilizing an understanding of human cognition to generate a novel solution to a visualization reasoning bias in the context of hurricane forecast track visualizations.
  3. Network visualization is one of the most widely used tools in digital humanities research. The idea of uncertain or “fuzzy” data is also a core notion in digital humanities research. Yet network visualizations in digital humanities do not always prominently represent uncertainty. In this article, we present a mathematical and logical model of uncertainty as a range of values which can be used in network visualizations. We review some of the principles for visualizing uncertainty of different kinds, visual variables that can be used for representing uncertainty, and how these variables have been used to represent different data types in visualizations drawn from a range of non-humanities fields like climate science and bioinformatics. We then provide examples of two diagrams: one in which the variables displaying degrees of uncertainty are integrated into the graph and one in which glyphs are added to represent data certainty and uncertainty. Finally, we discuss how probabilistic data and what-if scenarios could be used to expand the representation of uncertainty in humanities network visualizations.
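The range-of-values model of uncertainty described in item 3 can be sketched minimally. The class and mapping below are illustrative assumptions of my own, not the authors' implementation: an uncertain edge weight is stored as an interval, and the width of that interval is mapped to a visual variable such as opacity.

```python
# Illustrative sketch (not the authors' implementation): model an
# uncertain network-edge weight as a range [low, high] and map the
# width of the range to a visual variable such as line opacity.
from dataclasses import dataclass

@dataclass(frozen=True)
class UncertainWeight:
    low: float   # smallest plausible value
    high: float  # largest plausible value

    @property
    def midpoint(self) -> float:
        return (self.low + self.high) / 2

    @property
    def spread(self) -> float:
        return self.high - self.low

def edge_alpha(w: UncertainWeight, max_spread: float) -> float:
    """More certain edges (smaller spread) are drawn more opaque."""
    if max_spread == 0:
        return 1.0
    return 1.0 - min(w.spread / max_spread, 1.0)

# Hypothetical humanities example: a correspondence tie dated only to a
# fifteen-year window, versus one with an exactly attested date.
fuzzy = UncertainWeight(low=1655.0, high=1670.0)
exact = UncertainWeight(low=1662.0, high=1662.0)
print(edge_alpha(fuzzy, max_spread=15.0))  # 0.0: maximally uncertain, most transparent
print(edge_alpha(exact, max_spread=15.0))  # 1.0: fully certain, fully opaque
```

The same interval could instead drive edge width, blur, or a sketchy rendering style; opacity is used here only because it is one of the visual variables commonly listed for uncertainty.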
  4.
    Establishing common ground and maintaining shared awareness amongst participants is a key challenge in collaborative visualization. For real-time collaboration, existing work has primarily focused on synchronizing constituent visualizations - an approach that makes it difficult for users to work independently, or selectively attend to their collaborators' activity. To address this gap, we introduce a design space for representing synchronous multi-user collaboration in visualizations defined by two orthogonal axes: situatedness, or whether collaborators' interactions are overlaid on or shown outside of a user's view, and specificity, or whether collaborators are depicted through abstract, generic representations or through specific means customized for the given visualization. We populate this design space with a variety of examples including generic and custom synchronized cursors, and user legends that collect these cursors together or reproduce collaborators' views as thumbnails. To build common ground, users can interact with these representations by peeking to take a quick look at a collaborator's view, tracking to follow along with a collaborator in real-time, and forking to independently explore the visualization based on a collaborator's work. We present a reference implementation of a wrapper library that converts interactive Vega-Lite charts into collaborative visualizations. We find that our approach affords synchronous collaboration across an expressive range of visual designs and interaction techniques. 
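The two orthogonal axes in item 4 can be made concrete with a small sketch. The enum names and the classification of each example are my own labels inferred from the abstract, not the paper's terminology or API.

```python
# Illustrative sketch of the two-axis design space (situatedness x
# specificity) from the abstract; labels and classifications are
# assumptions, not the paper's own vocabulary.
from enum import Enum

class Situatedness(Enum):
    OVERLAID = "shown on top of the user's view"
    PERIPHERAL = "shown outside the user's view"

class Specificity(Enum):
    GENERIC = "abstract representation reused across visualizations"
    CUSTOM = "representation tailored to the given visualization"

# Each example from the abstract placed at a point in the design space.
examples = {
    "generic synchronized cursor": (Situatedness.OVERLAID, Specificity.GENERIC),
    "custom synchronized cursor": (Situatedness.OVERLAID, Specificity.CUSTOM),
    "user legend collecting cursors": (Situatedness.PERIPHERAL, Specificity.GENERIC),
    "thumbnail of a collaborator's view": (Situatedness.PERIPHERAL, Specificity.CUSTOM),
}

for name, (sit, spec) in examples.items():
    print(f"{name}: {sit.name} / {spec.name}")
```

Treating the axes as enumerations makes the orthogonality explicit: any representation of a collaborator occupies exactly one cell of the resulting 2x2 space.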
  5. Chart construction errors, such as truncated axes or inexpressive visual encodings, can hinder reading a visualization, or worse, imply misleading facts about the underlying data. These errors can be caught by critical readings of visualizations, but readers must have a high level of data and design literacy and must be paying close attention. To address this issue, we introduce VisuaLint: a technique for surfacing chart construction errors in situ. Inspired by the ubiquitous red wavy underline that indicates spelling mistakes, visualization elements that contain errors (e.g., axes and legends) are sketchily rendered and accompanied by a concise annotation. VisuaLint is unobtrusive — it does not interfere with reading a visualization — and its direct display establishes a close mapping between erroneous elements and the expression of error. We demonstrate five examples of VisuaLint and present the results of a crowdsourced evaluation (N = 62) of its efficacy. These results contribute an empirical baseline proficiency for recognizing chart construction errors, and indicate near-universal difficulty in error identification. We find that people more reliably identify chart construction errors after being shown examples of VisuaLint, and prefer more verbose explanations for unfamiliar or less obvious flaws.