Visualization grammars are gaining popularity as they allow visualization specialists and experienced users to quickly create static and interactive views. Existing grammars, however, mostly focus on abstract views, ignoring three-dimensional (3D) views, which are essential in fields such as the natural sciences. We propose a generalized interaction grammar for the problem of coordinating heterogeneous view types, such as standard charts (e.g., based on Vega-Lite) and 3D anatomical views. An important aspect of our web-based framework is that user interactions with data items at various levels of detail can be systematically integrated and used to control the overall layout of the application workspace. With the help of a concise JSON-based specification of the intended workflow, we can handle complex interactive visual analysis scenarios. This enables rapid prototyping and iterative refinement of the visual analysis tool in collaboration with domain experts. We illustrate the usefulness of our framework in two real-world case studies from the field of neuroscience. Since the logic of the presented grammar-based approach for handling interactions between heterogeneous web-based views is free of application specifics, it can also serve as a template for applications beyond biological research.
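
As a purely hypothetical illustration of such a concise, JSON-based workflow specification (the abstract does not give the grammar's actual vocabulary, so every property name below is an assumption), a coordination spec might pair a Vega-Lite chart with a 3D view and declare how selections propagate between them and into the workspace layout:

```typescript
// Hypothetical sketch only: view types, event names, and actions are invented
// for illustration; they are not the grammar's real keywords.
const workspaceSpec = {
  views: [
    { id: "expression", type: "vega-lite", spec: "specs/expression_scatter.json" },
    { id: "anatomy", type: "3d-anatomy", source: "models/brain_atlas.glb" },
  ],
  interactions: [
    // Selecting points in the chart highlights matching regions in the 3D view
    // and gives that view more space in the workspace layout.
    {
      on: { view: "expression", event: "select" },
      do: [
        { view: "anatomy", action: "highlight", key: "region_id" },
        { layout: { focus: "anatomy" } },
      ],
    },
  ],
};
```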
FAIR and Interactive Data Graphics from a Scientific Knowledge Graph

Abstract: Graph databases capture richly linked domain knowledge by integrating heterogeneous data and metadata into a unified representation. Here, we present the use of bespoke, interactive data graphics (bar charts, scatter plots, etc.) for visual exploration of a knowledge graph. By modeling a chart as a set of metadata that describes semantic context (SPARQL query) separately from visual context (Vega-Lite specification), we leverage the high-level, declarative nature of the SPARQL and Vega-Lite grammars to concisely specify web-based, interactive data graphics synchronized to a knowledge graph. Resources with dereferenceable URIs (uniform resource identifiers) can employ the hyperlink encoding channel or image marks in Vega-Lite to amplify the information content of a given data graphic, and published charts populate a browsable gallery of the database. We discuss design considerations that arise in relation to portability, persistence, and performance. Altogether, this pairing of SPARQL and Vega-Lite—demonstrated here in the domain of polymer nanocomposite materials science—offers an extensible approach to FAIR (findable, accessible, interoperable, reusable) scientific data visualization within a knowledge graph framework.
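
As a rough sketch of this pairing (not the authors' implementation), a chart record could hold a SPARQL query alongside a Vega-Lite specification, run the query against a SPARQL endpoint, and feed the bindings into the chart. The endpoint, vocabulary, and field names below are placeholders.

```typescript
// Sketch of the described SPARQL + Vega-Lite pairing; endpoint, vocabulary,
// and field names are placeholders rather than the paper's actual schema.
import vegaEmbed from "vega-embed";

const chartMetadata = {
  // Semantic context: what to ask the knowledge graph.
  query: `
    PREFIX ex: <http://example.org/ns#>
    SELECT ?material ?modulus WHERE {
      ?sample a ex:NanocompositeSample ;
              ex:material ?material ;
              ex:elasticModulus ?modulus .
    } LIMIT 100`,
  // Visual context: how to draw the result set.
  spec: {
    mark: "point",
    encoding: {
      x: { field: "material", type: "nominal" },
      y: { field: "modulus", type: "quantitative" },
    },
  },
};

async function renderChart(endpoint: string): Promise<void> {
  // Standard SPARQL 1.1 protocol: POST the query, request JSON results.
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/sparql-query",
      Accept: "application/sparql-results+json",
    },
    body: chartMetadata.query,
  });
  const json = await res.json();
  // Flatten SPARQL bindings into plain rows Vega-Lite can consume.
  const rows = json.results.bindings.map((b: Record<string, { value: string }>) =>
    Object.fromEntries(Object.entries(b).map(([k, v]) => [k, v.value]))
  );
  await vegaEmbed("#chart", { ...chartMetadata.spec, data: { values: rows } } as any);
}
```

Query/spec pairs of this shape can then be stored, listed in a gallery, and re-run against the live graph whenever a chart is viewed.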

- PAR ID: 10367807
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: Scientific Data
- Volume: 9
- Issue: 1
- ISSN: 2052-4463
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation

More Like this

We present Animated Vega-Lite, a set of extensions to Vega-Lite that model animated visualizations as time-varying data queries. In contrast to alternate approaches for specifying animated visualizations, which prize a highly expressive design space, Animated Vega-Lite prioritizes unifying animation with the language's existing abstractions for static and interactive visualizations to enable authors to smoothly move between or combine these modalities. Thus, to compose animation with static visualizations, we represent time as an encoding channel. Time encodings map a data field to animation keyframes, providing a lightweight specification for animations without interaction. To compose animation and interaction, we also represent time as an event stream; Vega-Lite selections, which provide dynamic data queries, are now driven not only by input events but by timer ticks as well. We evaluate the expressiveness of our approach through a gallery of diverse examples that demonstrate coverage over taxonomies of both interaction and animation. We also critically reflect on the conceptual affordances and limitations of our contribution by interviewing five expert developers of existing animation grammars. These reflections highlight the key motivating role of in-the-wild examples, and identify three central tradeoffs: the language design process, the types of animated transitions supported, and how the systems model keyframes.
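
To make the "time as an encoding channel" idea concrete, the following is an illustrative sketch in that style, not the official syntax: Animated Vega-Lite is a research extension, so stock Vega-Lite will not animate this object, and the dataset and field names are assumptions.

```typescript
// Illustrative sketch of the "time" encoding channel described above.
// Stock Vega-Lite does not recognize this channel; dataset and fields are assumed.
const animatedSpec = {
  data: { url: "data/gapminder.json" },
  mark: "point",
  encoding: {
    x: { field: "fertility", type: "quantitative" },
    y: { field: "life_expect", type: "quantitative" },
    // Each distinct year becomes an animation keyframe; per the paper's design,
    // a timer-driven event stream (not only input events) advances the selection.
    time: { field: "year" },
  },
};
```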

Visualization recommender systems attempt to automate design decisions spanning choices of selected data, transformations, and visual encodings. However, across invocations such recommenders may lack the context of prior results, producing unstable outputs that override earlier design choices. To better balance automated suggestions with user intent, we contribute Dziban, a visualization API that supports both ambiguous specification and a novel anchoring mechanism for conveying desired context. Dziban uses the Draco knowledge base to automatically complete partial specifications and suggest appropriate visualizations. In addition, it extends Draco with chart similarity logic, enabling recommendations that also remain perceptually similar to a provided “anchor” chart. Existing APIs for exploratory visualization, such as ggplot2 and Vega-Lite, require fully specified chart definitions. In contrast, Dziban provides a more concise and flexible authoring experience through automated design, while preserving predictability and control through anchored recommendations.

Establishing common ground and maintaining shared awareness amongst participants is a key challenge in collaborative visualization. For real-time collaboration, existing work has primarily focused on synchronizing constituent visualizations - an approach that makes it difficult for users to work independently, or selectively attend to their collaborators' activity. To address this gap, we introduce a design space for representing synchronous multi-user collaboration in visualizations defined by two orthogonal axes: situatedness, or whether collaborators' interactions are overlaid on or shown outside of a user's view, and specificity, or whether collaborators are depicted through abstract, generic representations or through specific means customized for the given visualization. We populate this design space with a variety of examples including generic and custom synchronized cursors, and user legends that collect these cursors together or reproduce collaborators' views as thumbnails. To build common ground, users can interact with these representations by peeking to take a quick look at a collaborator's view, tracking to follow along with a collaborator in real-time, and forking to independently explore the visualization based on a collaborator's work. We present a reference implementation of a wrapper library that converts interactive Vega-Lite charts into collaborative visualizations. We find that our approach affords synchronous collaboration across an expressive range of visual designs and interaction techniques.
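
The wrapper-library idea can be approximated with standard Vega APIs. The sketch below is not the paper's library: it broadcasts one user's interval selection to peers over a BroadcastChannel and surfaces it as a simple out-of-view awareness cue; the channel name, element ids, and demo dataset are assumptions.

```typescript
// Minimal sketch: share a Vega-Lite interval selection with collaborators so
// each client can show the others' activity. Not the paper's actual library.
import vegaEmbed from "vega-embed";

const spec: any = {
  data: { url: "data/cars.json" }, // assumed demo dataset
  mark: "point",
  params: [{ name: "brush", select: "interval" }],
  encoding: {
    x: { field: "Horsepower", type: "quantitative" },
    y: { field: "Miles_per_Gallon", type: "quantitative" },
  },
};

const peers = new BroadcastChannel("collab-demo"); // stand-in for a real sync backend
const me = crypto.randomUUID();

vegaEmbed("#chart", spec).then(({ view }) => {
  // Broadcast my brush extents whenever the selection changes.
  view.addSignalListener("brush", (_name, value) => {
    peers.postMessage({ user: me, brush: value });
  });

  // Show collaborators' activity outside the plot (a simple "user legend").
  peers.onmessage = (e) => {
    if (e.data.user === me) return;
    const el = document.getElementById("peer-activity")!;
    el.textContent = `${e.data.user.slice(0, 8)} brushed: ${JSON.stringify(e.data.brush)}`;
  };
});
```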

Introduction: Informational graphics and data representations (e.g., charts and figures) are critical for accessing educational content. Novel technologies, such as the multimodal touchscreen which displays audio, haptic, and visual information, are promising platforms for providing diverse means of access to digital content. This work evaluated educational graphics rendered on a touchscreen compared to the current standard for accessing graphical content. Method: Three bar charts and geometry figures were evaluated on student (N = 20) ability to orient to and extract information from the touchscreen and print. Participants explored the graphics and then were administered a set of questions (11–12 depending on graphic group). In addition, participants' attitudes toward using the mediums were assessed. Results: Participants performed statistically significantly better on questions assessing information orientation using the touchscreen than print for both bar charts and geometry figures. No statistically significant difference in information extraction ability was found between mediums on either graphic type. Participants responded significantly more favorably to the touchscreen than the print graphics, indicating them as more helpful, interesting, fun, and less confusing. Discussion: Accessing and orienting to information was highly successful by participants using the touchscreen, which was the preferred means of accessing graphical information when compared to the print image for both geometry figures and bar charts. This study highlights challenges in presenting graphics both on touchscreens and in print. Implications for Practitioners: This study offers preliminary support for the use of multimodal, touchscreen tablets as educational tools. Student ability using touchscreen-based graphics seems to be comparable to traditional types of graphics (large print and embossed, tactile graphics), although further investigation may be necessary for tactile graphic users. In summary, educators of students with blindness and visual impairments should consider ways to utilize new technologies, such as touchscreens, to provide more diverse access to graphical information.