Title: Towards Concept-Driven Visual Analytics
Visualizations of data provide a proven method for analysts to explore and make data-driven discoveries. However, current visualization tools provide only limited support for hypothesis-driven analyses, and often lack capabilities that would allow users to visually test the fit of their conceptual models against the data. This imbalance could bias users to overly rely on exploratory visual analysis as the principal mode of inquiry, which can be detrimental to discovery. To address this gap, we propose a new paradigm for ‘concept-driven’ visual analysis. In this style of analysis, analysts share their conceptual models and hypotheses with the system. The system then uses those inputs to drive the generation of visualizations, while providing plots and interactions to explore places where models and data disagree. We discuss key characteristics and design considerations for concept-driven visualizations, and report preliminary findings from a formative study.
Award ID(s):
1755611
PAR ID:
10089463
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
IEEE Conference on Visual Analytics Science and Technology (extended abstract)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
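To make the workflow in the abstract above concrete, here is a minimal sketch of the concept-driven loop under stated assumptions: a hypothetical expectation ("mpg decreases as horsepower increases"), synthetic data, and a plain linear trend standing in for the analyst's conceptual model. It illustrates the paradigm only; it is not the authors' system.

```python
# Minimal sketch (not the paper's implementation) of a "concept-driven" check:
# the analyst states an expected relationship, the system fits that expectation
# to the data and highlights the points that disagree with it.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical dataset: engine horsepower vs. fuel economy.
horsepower = rng.uniform(60, 220, 200)
mpg = 45 - 0.12 * horsepower + rng.normal(0, 4, 200)

# Analyst's conceptual model: "mpg decreases as horsepower increases".
expected_sign = -1  # a negative slope is expected

# Fit a simple linear trend as the data-side counterpart of the stated model.
slope, intercept = np.polyfit(horsepower, mpg, deg=1)
fitted = slope * horsepower + intercept
residuals = mpg - fitted

# Points far from the expected trend are the "places where models and data disagree".
disagree = np.abs(residuals) > 2 * residuals.std()

plt.scatter(horsepower[~disagree], mpg[~disagree], c="steelblue", label="consistent with model")
plt.scatter(horsepower[disagree], mpg[disagree], c="crimson", label="disagrees with model")
order = np.argsort(horsepower)
plt.plot(horsepower[order], fitted[order], c="black",
         label=f"fitted trend (slope {slope:.2f}, expected sign {expected_sign})")
plt.xlabel("horsepower")
plt.ylabel("mpg")
plt.legend()
plt.show()
```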
More Like this
  1. Data visualization provides a powerful way for analysts to explore and make data-driven discoveries. However, current visual analytic tools provide only limited support for hypothesis-driven inquiry, as their built-in interactions and workflows are primarily intended for exploratory analysis. Visualization tools notably lack capabilities that would allow users to visually and incrementally test the fit of their conceptual models and provisional hypotheses against the data. This imbalance could bias users to overly rely on exploratory analysis as the principal mode of inquiry, which can be detrimental to discovery. In this paper, we introduce Visual (dis)Confirmation, a tool for conducting confirmatory, hypothesis-driven analyses with visualizations. Users interact by framing hypotheses and data expectations in natural language. The system then selects conceptually relevant data features and automatically generates visualizations to validate the underlying expectations. Distinctively, the resulting visualizations also highlight places where one's mental model disagrees with the data, so as to stimulate reflection. The proposed tool represents a new class of interactive data systems capable of supporting confirmatory visual analysis, and responding more intelligently by spotlighting gaps between one's knowledge and the data. We describe the algorithmic techniques behind this workflow. We also demonstrate the utility of the tool through a case study.
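The paper describes its own algorithmic techniques; the sketch below is only a loose approximation of that workflow, in which hypothetical helpers match_columns and check_expectation map words from a natural-language expectation onto dataframe columns via fuzzy string matching and then test the stated direction with a correlation sign.

```python
# Loose sketch (hypothetical, not the system's actual algorithm) of mapping a
# natural-language expectation to data columns and testing its direction.
import numpy as np
import pandas as pd
from difflib import get_close_matches

def match_columns(hypothesis: str, columns) -> list:
    """Pick dataframe columns whose names resemble words in the hypothesis."""
    hits = []
    for word in hypothesis.lower().replace(",", " ").split():
        close = get_close_matches(word, [c.lower() for c in columns], n=1, cutoff=0.8)
        if close:
            hits.append(next(c for c in columns if c.lower() == close[0]))
    return hits

def check_expectation(df: pd.DataFrame, hypothesis: str, expected_sign: int) -> str:
    """Compare the sign of the observed correlation against the stated expectation."""
    x, y = match_columns(hypothesis, df.columns)[:2]
    r = df[x].corr(df[y])
    verdict = "consistent with" if np.sign(r) == expected_sign else "in tension with"
    return f"{x} vs {y}: r = {r:.2f}, which is {verdict} the stated expectation."

# Toy data and a toy hypothesis phrased in natural language.
rng = np.random.default_rng(1)
df = pd.DataFrame({"income": rng.normal(50, 10, 300)})
df["spending"] = 0.6 * df["income"] + rng.normal(0, 5, 300)

print(check_expectation(df, "I expect spending to rise with income", expected_sign=+1))
```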
  2. Visualization tools facilitate exploratory data analysis, but fall short of supporting hypothesis-based reasoning. We conducted an exploratory study to investigate how visualizations might support a concept-driven analysis style, where users can optionally share their hypotheses and conceptual models in natural language, and receive customized plots depicting the fit of their models to the data. We report on how participants leveraged these unique affordances for visual analysis. We found that a majority of participants articulated meaningful models and predictions, utilizing them as entry points to sensemaking. We contribute an abstract typology representing the types of models participants held and externalized as data expectations. Our findings suggest ways for rearchitecting visual analytics tools to better support hypothesis- and model-based reasoning, in addition to their traditional role in exploratory analysis. We discuss the design implications and reflect on the potential benefits and challenges involved.
  3. The visual analytics community has long aimed to understand users better and assist them in their analytic endeavors. As a result, numerous conceptual models of visual analytics aim to formalize common workflows, techniques, and goals leveraged by analysts. While many of the existing approaches are rich in detail, they each are specific to a particular aspect of the visual analytic process. Furthermore, with an ever‐expanding array of novel artificial intelligence techniques and advances in visual analytic settings, existing conceptual models may not provide enough expressivity to bridge the two fields. In this work, we propose an agent‐based conceptual model for the visual analytic process by drawing parallels from the artificial intelligence literature. We present three examples from the visual analytics literature as case studies and examine them in detail using our framework. Our simple yet robust framework unifies the visual analytic pipeline to enable researchers and practitioners to reason about scenarios that are becoming increasingly prominent in the field, namely mixed‐initiative, guided, and collaborative analysis. Furthermore, it will allow us to characterize analysts, visual analytic settings, and guidance from the lenses of human agents, environments, and artificial agents, respectively.
  4. Analysts often make visual causal inferences about possible data-generating models. However, visual analytics (VA) software tends to leave these models implicit in the mind of the analyst, which casts doubt on the statistical validity of informal visual “insights”. We formally evaluate the quality of causal inferences from visualizations by adopting causal support—a Bayesian cognition model that learns the probability of alternative causal explanations given some data—as a normative benchmark for causal inferences. We contribute two experiments assessing how well crowdworkers can detect (1) a treatment effect and (2) a confounding relationship. We find that chart users’ causal inferences tend to be insensitive to sample size such that they deviate from our normative benchmark. While interactively cross-filtering data in visualizations can improve sensitivity, on average users do not perform reliably better with common visualizations than they do with textual contingency tables. These experiments demonstrate the utility of causal support as an evaluation framework for inferences in VA and point to opportunities to make analysts’ mental models more explicit in VA software. 
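As a hedged illustration of causal support as Bayesian model comparison (the paper's exact formulation may differ), the sketch below weighs a "treatment effect" model with separate group rates against a "no effect" model with one shared rate, assuming Beta(1,1) priors, equal prior odds, and a hypothetical two-group outcome table.

```python
# Rough sketch of "causal support" as Bayesian model comparison (assumptions:
# Beta(1,1) priors and equal prior odds; not the paper's exact formulation).
from math import exp
from scipy.special import betaln

def log_marginal_one_rate(k, n):
    """Log marginal likelihood of k successes in n trials under a Beta(1,1) prior
    (binomial coefficients omitted; they cancel when comparing the two models)."""
    return betaln(k + 1, n - k + 1) - betaln(1, 1)

def causal_support(k_treat, n_treat, k_ctrl, n_ctrl):
    # Model A: treatment has an effect -> a separate rate for each group.
    log_m_effect = (log_marginal_one_rate(k_treat, n_treat)
                    + log_marginal_one_rate(k_ctrl, n_ctrl))
    # Model B: no effect -> one shared rate for the pooled data.
    log_m_null = log_marginal_one_rate(k_treat + k_ctrl, n_treat + n_ctrl)
    return log_m_effect - log_m_null  # log Bayes factor favouring a treatment effect

# Hypothetical example: 40/100 recover with treatment vs. 25/100 without.
lbf = causal_support(40, 100, 25, 100)
print(f"log Bayes factor = {lbf:.2f}  (posterior P(effect) ~ {1 / (1 + exp(-lbf)):.2f})")
```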
  5. Interactive dimensionality reduction helps analysts explore high-dimensional data according to their personal needs and domain-specific problems. Recently, expressive nonlinear models have been employed to support these tasks. However, the interpretation of these human-steered nonlinear models during human-in-the-loop analysis has not been explored. To address this problem, we present a new visual explanation design called semantic explanation. Semantic explanation visualizes model behaviors in a manner that is similar to users’ direct projection manipulations. This design conforms to the spatial analytic process and enables analysts to better understand the updated model in response to their interactions. We propose a pipeline to empower interactive dimensionality reduction with semantic explanation using counterfactuals. Based on the pipeline, we implement a visual text analytics system with nonlinear dimensionality reduction powered by deep learning via the BERT model. We demonstrate the efficacy of semantic explanation with two case studies of academic article exploration and intelligence analysis.
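The system described above relies on BERT embeddings and a learned nonlinear projection; as a simplified, self-contained stand-in, the sketch below probes a toy two-layer projection (the hypothetical project function) with counterfactual inputs, reporting how far a point would move in the 2-D view if each input feature were perturbed on its own.

```python
# Illustrative sketch only (the system itself uses BERT embeddings and a learned
# nonlinear projection): probing a projection with counterfactual inputs to see
# which features drive a point's position in the 2-D view.
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=(8, 2))

def project(x: np.ndarray) -> np.ndarray:
    """Stand-in nonlinear dimensionality reduction: a tiny fixed two-layer map."""
    return np.tanh(x @ W1) @ W2

def counterfactual_shifts(x: np.ndarray, delta: float = 0.5) -> np.ndarray:
    """For each input feature, how far the projected point moves if that feature
    alone is nudged -- a feature-level 'semantic' explanation of the layout."""
    base = project(x)
    shifts = []
    for j in range(x.size):
        x_cf = x.copy()
        x_cf[j] += delta
        shifts.append(np.linalg.norm(project(x_cf) - base))
    return np.array(shifts)

x = rng.normal(size=5)
for j, s in enumerate(counterfactual_shifts(x)):
    print(f"feature {j}: perturbing it moves the point {s:.2f} units in the 2-D view")
```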