Title: FAIR and Interactive Data Graphics from a Scientific Knowledge Graph
Abstract

Graph databases capture richly linked domain knowledge by integrating heterogeneous data and metadata into a unified representation. Here, we present the use of bespoke, interactive data graphics (bar charts, scatter plots, etc.) for visual exploration of a knowledge graph. By modeling a chart as a set of metadata that describes semantic context (SPARQL query) separately from visual context (Vega-Lite specification), we leverage the high-level, declarative nature of the SPARQL and Vega-Lite grammars to concisely specify web-based, interactive data graphics synchronized to a knowledge graph. Resources with dereferenceable URIs (uniform resource identifiers) can employ the hyperlink encoding channel or image marks in Vega-Lite to amplify the information content of a given data graphic, and published charts populate a browsable gallery of the database. We discuss design considerations that arise in relation to portability, persistence, and performance. Altogether, this pairing of SPARQL and Vega-Lite—demonstrated here in the domain of polymer nanocomposite materials science—offers an extensible approach to FAIR (findable, accessible, interoperable, reusable) scientific data visualization within a knowledge graph framework.
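The pairing described above can be sketched as a chart record that stores the semantic context (a SPARQL query) separately from the visual context (a Vega-Lite specification). The prefixes, property IRIs, field names, and the scatter-plot example below are illustrative assumptions, not taken from the actual database; the `href` channel shows how a dereferenceable URI can be attached to each mark.

```python
import json

# Hypothetical chart-as-metadata record: SPARQL supplies the data,
# Vega-Lite supplies the visual encoding. All names are illustrative.
sparql_query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex: <http://example.org/nanocomposite#>
SELECT ?sample ?filler_fraction ?modulus ?sample_uri
WHERE {
  ?sample_uri a ex:Sample ;
              ex:fillerVolumeFraction ?filler_fraction ;
              ex:tensileModulus ?modulus ;
              rdfs:label ?sample .
}
"""

# The "href" encoding channel turns each mark into a hyperlink to the
# sample's dereferenceable URI, amplifying the chart's information content.
vega_lite_spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "mark": "point",
    "encoding": {
        "x": {"field": "filler_fraction", "type": "quantitative",
              "title": "Filler volume fraction"},
        "y": {"field": "modulus", "type": "quantitative",
              "title": "Tensile modulus (GPa)"},
        "href": {"field": "sample_uri", "type": "nominal"},
    },
}

# The published chart pairs the two declarative contexts in one record.
chart_record = {"query": sparql_query, "spec": vega_lite_spec}
print(json.dumps(chart_record["spec"]["encoding"]["href"]))
```

Because both grammars are declarative, the record stays small and portable: re-running the query re-synchronizes the chart with the knowledge graph without touching the visual specification.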

 
Award ID(s):
1835677 1835648
NSF-PAR ID:
10367807
Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Scientific Data
Volume:
9
Issue:
1
ISSN:
2052-4463
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We present Animated Vega-Lite, a set of extensions to Vega-Lite that model animated visualizations as time-varying data queries. In contrast to alternate approaches for specifying animated visualizations, which prize a highly expressive design space, Animated Vega-Lite prioritizes unifying animation with the language's existing abstractions for static and interactive visualizations to enable authors to smoothly move between or combine these modalities. Thus, to compose animation with static visualizations, we represent time as an encoding channel. Time encodings map a data field to animation keyframes, providing a lightweight specification for animations without interaction. To compose animation and interaction, we also represent time as an event stream; Vega-Lite selections, which provide dynamic data queries, are now driven not only by input events but by timer ticks as well. We evaluate the expressiveness of our approach through a gallery of diverse examples that demonstrate coverage over taxonomies of both interaction and animation. We also critically reflect on the conceptual affordances and limitations of our contribution by interviewing five expert developers of existing animation grammars. These reflections highlight the key motivating role of in-the-wild examples, and identify three central tradeoffs: the language design process, the types of animated transitions supported, and how the systems model keyframes. 
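The time-as-encoding-channel idea in the abstract above can be illustrated with a small spec sketch. The `"time"` channel and the field names (`gdp`, `life_expectancy`, `year`) are assumptions drawn from the abstract's description, not a verified copy of the Animated Vega-Lite grammar.

```python
# Minimal sketch of treating time as an encoding channel: mapping a data
# field to "time" yields one animation keyframe per distinct field value,
# composing animation with an otherwise static scatter plot.
animated_spec = {
    "mark": "point",
    "encoding": {
        "x": {"field": "gdp", "type": "quantitative"},
        "y": {"field": "life_expectancy", "type": "quantitative"},
        # Hypothetical time channel: one keyframe per value of "year".
        "time": {"field": "year", "type": "ordinal"},
    },
}

keyframe_field = animated_spec["encoding"]["time"]["field"]
print(keyframe_field)
```

In the same spirit, the abstract's second composition (animation with interaction) would drive a Vega-Lite selection from timer ticks rather than input events, reusing the language's existing dynamic-query machinery.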
  2. Visualization recommender systems attempt to automate design decisions spanning choices of selected data, transformations, and visual encodings. However, across invocations such recommenders may lack the context of prior results, producing unstable outputs that override earlier design choices. To better balance automated suggestions with user intent, we contribute Dziban, a visualization API that supports both ambiguous specification and a novel anchoring mechanism for conveying desired context. Dziban uses the Draco knowledge base to automatically complete partial specifications and suggest appropriate visualizations. In addition, it extends Draco with chart similarity logic, enabling recommendations that also remain perceptually similar to a provided “anchor” chart. Existing APIs for exploratory visualization, such as ggplot2 and Vega-Lite, require fully specified chart definitions. In contrast, Dziban provides a more concise and flexible authoring experience through automated design, while preserving predictability and control through anchored recommendations. 
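The anchoring mechanism described above can be sketched as follows. The `Chart` class, its methods, and the completion rule are illustrative stand-ins, not Dziban's actual API (which completes partial specifications via the Draco knowledge base and a perceptual chart-similarity model).

```python
# Hypothetical sketch of anchored recommendation: a partial specification
# is completed automatically, and an "anchor" chart biases the completion
# toward perceptually similar choices instead of re-deriving them.
class Chart:
    def __init__(self, fields, mark=None):
        self.fields = fields      # ambiguous spec: mark may be left unset
        self.mark = mark
        self.anchor_chart = None

    def anchor(self, other):
        """Ask that the completed chart stay similar to `other`."""
        self.anchor_chart = other
        return self

    def complete(self):
        """Fill unspecified properties, preferring the anchor's choices."""
        if self.mark is None:
            self.mark = self.anchor_chart.mark if self.anchor_chart else "point"
        return self

base = Chart(["horsepower", "mpg"], mark="bar").complete()
# A follow-up view adds a field but stays anchored to the earlier design.
follow_up = Chart(["horsepower", "mpg", "origin"]).anchor(base).complete()
print(follow_up.mark)
```

The design point this illustrates: without the anchor, each invocation would optimize from scratch and could override earlier choices; the anchor carries prior context across invocations.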
  3.
    Establishing common ground and maintaining shared awareness amongst participants is a key challenge in collaborative visualization. For real-time collaboration, existing work has primarily focused on synchronizing constituent visualizations: an approach that makes it difficult for users to work independently, or selectively attend to their collaborators' activity. To address this gap, we introduce a design space for representing synchronous multi-user collaboration in visualizations, defined by two orthogonal axes: situatedness, or whether collaborators' interactions are overlaid on or shown outside of a user's view, and specificity, or whether collaborators are depicted through abstract, generic representations or through specific means customized for the given visualization. We populate this design space with a variety of examples including generic and custom synchronized cursors, and user legends that collect these cursors together or reproduce collaborators' views as thumbnails. To build common ground, users can interact with these representations by peeking to take a quick look at a collaborator's view, tracking to follow along with a collaborator in real-time, and forking to independently explore the visualization based on a collaborator's work. We present a reference implementation of a wrapper library that converts interactive Vega-Lite charts into collaborative visualizations. We find that our approach affords synchronous collaboration across an expressive range of visual designs and interaction techniques. 
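The peek / track / fork interactions described above can be modeled as operations on per-user view state. This is a minimal sketch with invented names; it illustrates the semantics only and reflects nothing of the actual wrapper library's implementation.

```python
# Each collaborator's exploration is a view-state dict (e.g. the current
# brush selection); peek reads, track mirrors, fork copies then diverges.
class Session:
    def __init__(self):
        self.views = {}                      # user -> view state

    def update(self, user, view):
        self.views[user] = view

    def peek(self, target):
        """Quick look at a collaborator's view without changing our own."""
        return dict(self.views[target])

    def track(self, user, target):
        """Follow along: mirror the target's view (shared reference)."""
        self.views[user] = self.views[target]

    def fork(self, user, target):
        """Branch off the target's work, then explore independently."""
        self.views[user] = dict(self.views[target])

s = Session()
s.update("ana", {"brush": [0, 10]})
s.fork("ben", "ana")
s.views["ben"]["brush"] = [5, 20]    # Ben's change leaves Ana's view intact
print(s.views["ana"]["brush"])
```

The copy-versus-reference distinction is the crux: tracking shares state so the follower stays in sync, while forking duplicates it so independent exploration cannot disturb the collaborator.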
  4. Introduction: Informational graphics and data representations (e.g., charts and figures) are critical for accessing educational content. Novel technologies, such as multimodal touchscreens that display audio, haptic, and visual information, are promising platforms for diverse means of accessing digital content. This work evaluated educational graphics rendered on a touchscreen against the current standard for accessing graphical content. Method: Three bar charts and geometry figures were evaluated on students' (N = 20) ability to orient to and extract information from the touchscreen and print. Participants explored the graphics and then were administered a set of questions (11–12, depending on graphic group). In addition, participants' attitudes toward using the mediums were assessed. Results: Participants performed statistically significantly better on questions assessing information orientation using the touchscreen than print for both bar charts and geometry figures. No statistically significant difference in information-extraction ability was found between mediums on either graphic type. Participants responded significantly more favorably to the touchscreen than to the print graphics, rating them as more helpful, interesting, and fun, and less confusing. Discussion: Participants were highly successful at accessing and orienting to information using the touchscreen, which was the preferred means of accessing graphical information compared to print for both geometry figures and bar charts. This study highlights challenges in presenting graphics both on touchscreens and in print. Implications for Practitioners: This study offers preliminary support for the use of multimodal touchscreen tablets as educational tools. Student ability with touchscreen-based graphics appears comparable to that with traditional graphic types (large print and embossed, tactile graphics), although further investigation may be necessary for tactile graphic users. In summary, educators of students with blindness and visual impairments should consider ways to utilize new technologies, such as touchscreens, to provide more diverse access to graphical information.
  5.
    Multi- and hyperspectral imaging modalities encompass a growing number of spectral techniques with applications in geospatial, biomedical, machine vision, and other fields. The rapidly increasing number of applications requires convenient, easy-to-navigate software that new and experienced users alike can use to analyse data and to develop, apply, and deploy novel algorithms. Herein, we present our platform, IDCube Lite, an Interactive Discovery Cube that performs essential operations in hyperspectral data analysis to realise the full potential of spectral imaging. The strength of the software lies in its interactive features, which let users optimise parameters and obtain visual feedback in a way not previously accessible with other software packages. The entire software can be operated without any prior programming skills, allowing interactive sessions with raw and processed data. IDCube Lite, a free version of the software described in the paper, offers many benefits compared to existing packages and provides the structural flexibility to discover new, hidden features, letting users integrate novel computational methods. 