Title: Lightweight Diagramming for Lightweight Formal Methods: A Grounded Language Design
Tools such as Alloy enable users to incrementally define, explore, verify, and diagnose specifications for complex systems. A critical component of these tools is a visualizer that lets users graphically explore generated models. As we show, however, a default visualizer that knows nothing about the domain can be unhelpful and can even actively violate presentational and cognitive principles. At the other extreme, full-blown custom visualization requires significant effort as well as knowledge that a tool user might not possess. Custom visualizations can also exhibit bad (even silent) failures. This paper charts a middle ground between the extremes of default and fully-customizable visualization. We capture essential domain information for lightweight diagramming, embodying it in a language. To identify key elements of lightweight diagrams, we ground the language design both in cognitive science research on diagrams and in a corpus of 58 custom visualizations. We distill from these sources a small set of orthogonal primitives, and use the primitives to guide a diagramming language called Cope-and-Drag (CnD). We evaluate CnD on sample tasks, three user studies, and performance benchmarks, and find that short CnD specifications consistently improve model comprehension over the Alloy default. CnD thus defines a new point in the design space of diagramming: a language that is lightweight, effective, and driven by sound principles.
Award ID(s):
2208731
PAR ID:
10642218
Author(s) / Creator(s):
; ; ;
Editor(s):
Aldrich, Jonathan; Silva, Alexandra
Publisher / Repository:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Date Published:
Volume:
333
ISSN:
1868-8969
Page Range / eLocation ID:
26:1-26:29
Subject(s) / Keyword(s):
formal methods; diagramming; visualization; language design; Human-centered computing → Visualization toolkits; Software and its engineering → Constraint and logic languages; Software and its engineering → Specification languages
Format(s):
Medium: X Size: 29 pages; 4917984 bytes Other: application/pdf
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Establishing common ground and maintaining shared awareness amongst participants is a key challenge in collaborative visualization. For real-time collaboration, existing work has primarily focused on synchronizing constituent visualizations - an approach that makes it difficult for users to work independently, or selectively attend to their collaborators' activity. To address this gap, we introduce a design space for representing synchronous multi-user collaboration in visualizations defined by two orthogonal axes: situatedness, or whether collaborators' interactions are overlaid on or shown outside of a user's view, and specificity, or whether collaborators are depicted through abstract, generic representations or through specific means customized for the given visualization. We populate this design space with a variety of examples including generic and custom synchronized cursors, and user legends that collect these cursors together or reproduce collaborators' views as thumbnails. To build common ground, users can interact with these representations by peeking to take a quick look at a collaborator's view, tracking to follow along with a collaborator in real-time, and forking to independently explore the visualization based on a collaborator's work. We present a reference implementation of a wrapper library that converts interactive Vega-Lite charts into collaborative visualizations. We find that our approach affords synchronous collaboration across an expressive range of visual designs and interaction techniques. 
  2. We present P5, a web-based visualization toolkit that combines declarative visualization grammar and GPU computing for progressive data analysis and visualization. Progressive analytics and visualization methods have recently emerged to support interactive analysis and exploration of big data. Progressive visualizations of incrementally refining results have the advantage of allowing users to steer the analysis process and make early decisions. P5 leverages declarative grammar for specifying visualization designs and exploits GPU computing to accelerate progressive data processing and rendering. The declarative specifications can be modified during progressive processing to create different visualizations for analyzing the intermediate results. To enable user interactions for progressive data analysis, P5 utilizes the GPU to automatically aggregate and index data based on declarative interaction specifications, facilitating effective interactive visualization. We demonstrate the effectiveness and usefulness of P5 through a variety of example applications and several performance benchmark tests.
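The core idea behind progressive analytics, as described in the P5 abstract, is to process data in chunks and emit incrementally refining results that a front end can render early. This is a generic sketch of that idea in Python, not P5's actual (JavaScript) API; the function name, chunk size, and stand-in data are all hypothetical:

```python
import random

def progressive_mean(stream, chunk_size=1000):
    """Yield an incrementally refined mean after each chunk of the
    stream, so a visualization can be redrawn before all data arrives."""
    total, count = 0.0, 0
    chunk = []
    for value in stream:
        chunk.append(value)
        if len(chunk) == chunk_size:
            total += sum(chunk)
            count += len(chunk)
            chunk.clear()
            yield total / count  # intermediate, refining estimate
    if chunk:  # flush the final partial chunk, if any
        total += sum(chunk)
        count += len(chunk)
        yield total / count

random.seed(0)
data = (random.gauss(5.0, 1.0) for _ in range(10_000))
estimates = list(progressive_mean(data, chunk_size=2500))
# Each successive estimate is based on more data than the last;
# a progressive system would redraw the chart after each one.
```

A real progressive toolkit would additionally let the user modify the specification mid-stream (as P5 does with its declarative grammar) and would offload the aggregation to the GPU rather than a Python loop.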
  3. Conceptual diagrams are used extensively to understand abstract relationships, explain complex ideas, and solve difficult problems. To illustrate concepts effectively, experts find appropriate visual representations and translate concepts into concrete shapes. This translation step is not supported explicitly by current diagramming tools. This paper investigates how domain experts create conceptual diagrams via semi-structured interviews with 18 participants from diverse backgrounds. Our participants created, adapted, and reused visual representations using both sketches and digital tools. However, they had trouble using current diagramming tools to transition from sketches and to reuse components from earlier diagrams. Our participants also expressed frustration with the slow feedback cycles and barriers to automation in their tools. Based on these results, we suggest four opportunities for diagramming tools — exploration support, representation salience, live engagement, and vocabulary correspondence — that together enable a natural diagramming experience. Finally, we discuss possibilities for leveraging recent research advances to develop natural diagramming tools.
  4. Murugappan, M. (Ed.)
    Scalp Electroencephalography (EEG) is one of the most popular noninvasive modalities for studying real-time neural phenomena. While traditional EEG studies have focused on identifying group-level statistical effects, the rise of machine learning has prompted a shift in computational neuroscience towards spatio-temporal predictive analyses. We introduce a novel open-source viewer, the EEG Prediction Visualizer (EPViz), to aid researchers in developing, validating, and reporting their predictive modeling outputs. EPViz is a lightweight and standalone software package developed in Python. Beyond viewing and manipulating the EEG data, EPViz allows researchers to load a PyTorch deep learning model, apply it to EEG features, and overlay the output channel-wise or subject-level temporal predictions on top of the original time series. These results can be saved as high-resolution images for use in manuscripts and presentations. EPViz also provides valuable tools for clinician-scientists, including spectrum visualization, computation of basic data statistics, and annotation editing. Finally, we have included a built-in EDF anonymization module to facilitate sharing of clinical data. Taken together, EPViz fills a critical gap in EEG visualization. Our user-friendly interface and rich collection of features may also help to promote collaboration between engineers and clinicians.
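The channel-wise prediction overlay that the EPViz abstract describes can be sketched generically: slide a window over each EEG channel, apply a predictor to each window, and keep one prediction per window so it can be drawn above the original time series. This is not EPViz's actual API; the threshold "model" below is a hypothetical stand-in for a trained PyTorch network, and the sample data is invented:

```python
def channelwise_predictions(eeg, window=4, threshold=1.0):
    """eeg: list of channels, each a list of samples.
    Returns, per channel, one 0/1 prediction per non-overlapping
    window, ready to be overlaid on the original time series."""
    out = []
    for channel in eeg:
        preds = []
        for start in range(0, len(channel) - window + 1, window):
            seg = channel[start:start + window]
            # Stand-in "model": flag windows whose mean amplitude is high.
            # A real tool would call a loaded PyTorch model here instead.
            preds.append(1 if sum(seg) / window > threshold else 0)
        out.append(preds)
    return out

eeg = [
    [0.1, 0.2, 0.1, 0.0, 2.0, 2.5, 1.8, 2.2],  # channel with a burst
    [0.0, 0.1, 0.0, 0.1, 0.0, 0.2, 0.1, 0.0],  # quiet channel
]
preds = channelwise_predictions(eeg, window=4, threshold=1.0)
# preds[0] == [0, 1]: only the second window of channel 0 is flagged.
```

In a viewer like the one described, each per-window prediction would be rendered as a colored band over the corresponding span of the raw signal.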
  5. Users often rely on GUIs to edit and interact with visualizations — a daunting task due to the large space of editing options. As a result, users are either overwhelmed by a complex UI or constrained by a custom UI with a tailored, fixed subset of options with limited editing flexibility. Natural Language Interfaces (NLIs) are emerging as a feasible alternative for users to specify edits. However, NLIs forgo the advantages of traditional GUIs: the ability to explore and repeat edits and to see instant visual feedback. We introduce DynaVis, which blends natural language and dynamically synthesized UI widgets. As the user describes an editing task in natural language, DynaVis performs the edit and synthesizes a persistent widget that the user can interact with to make further modifications. Study participants (n=24) preferred DynaVis over the NLI-only interface, citing the ease of making further edits and the confidence afforded by immediate visual feedback.