Title: Taggle: Combining overview and details in tabular data visualizations
Most tabular data visualization techniques focus on overviews, yet many practical analysis tasks are concerned with investigating individual items of interest. At the same time, relating an item to the rest of a potentially large table is important. In this work, we present Taggle, a tabular visualization technique for exploring and presenting large and complex tables. Taggle takes an item-centric, spreadsheet-like approach, visualizing each row in the source data individually using visual encodings for the cells. At the same time, Taggle introduces data-driven aggregation of data subsets. The aggregation strategy is complemented by interaction methods tailored to answer specific analysis questions, such as sorting based on multiple columns and rich data selection and filtering capabilities. We demonstrate Taggle with a case study, conducted by a domain expert, on complex genomics data analysis for the purpose of drug discovery.
Award ID(s):
1751238
PAR ID:
10527005
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
Information Visualization
Volume:
19
Issue:
2
ISSN:
1473-8716
Format(s):
Medium: X
Size(s):
p. 114-136
Sponsoring Org:
National Science Foundation
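
The Taggle abstract above names two concrete table operations: sorting on multiple columns and data-driven aggregation of row subsets. The short pandas sketch below illustrates only the semantics of those operations on a toy table; it is not Taggle's implementation, and the column names (gene, tissue, expr, mutated) are hypothetical.

    import pandas as pd

    # Toy table standing in for a complex genomics dataset.
    df = pd.DataFrame({
        "gene":    ["TP53", "KRAS", "EGFR", "BRAF", "MYC"],
        "tissue":  ["lung", "lung", "lung", "skin", "skin"],
        "expr":    [7.2, 5.1, 9.4, 6.3, 8.8],
        "mutated": [True, False, True, True, False],
    })

    # Sorting based on multiple columns: primary key 'tissue',
    # secondary key 'expr' in descending order.
    ranked = df.sort_values(["tissue", "expr"], ascending=[True, False])

    # Aggregation of a data subset: collapse each 'tissue' group into
    # one summary row, analogous to showing an aggregated visual
    # encoding for a block of rows instead of every row individually.
    summary = df.groupby("tissue").agg(
        n_rows=("gene", "size"),
        mean_expr=("expr", "mean"),
        any_mutated=("mutated", "any"),
    )
    print(ranked)
    print(summary)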
More Like this
  1. Abstract: Privacy of data, as well as anonymization of data for various kinds of analysis, has been addressed in the context of tabular transactional data, which was long the mainstream model. With the advent of the Internet and social networks, there is an emphasis on using different kinds of graphs for modeling and analysis. In addition to single graphs, MultiLayer Networks (or MLNs) are becoming popular for modeling and analyzing complex data that has multiple types of entities and relationships. They provide a better understanding of data as well as flexibility and efficiency of analysis. In this article, we trace the provenance of data privacy and some of the thinking on extending it to graph data models. We focus on the issues of data privacy for models that differ from traditional data models and discuss alternatives. We also consider privacy from a visualization perspective, as we have developed a community Dashboard for MLN generation, analysis, and visualization based on our research.
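
As a point of reference for the MLN model mentioned in the abstract above, the following is a minimal Python sketch of a MultiLayer Network data structure: one node set per layer (entity type), intra-layer edges within a layer, and inter-layer edges across layers. All names here are illustrative assumptions, not the paper's Dashboard API.

    from collections import defaultdict

    class MultiLayerNetwork:
        def __init__(self):
            self.layers = defaultdict(set)  # layer name -> node ids
            self.intra = defaultdict(set)   # layer name -> edges within that layer
            self.inter = set()              # edges connecting nodes across layers

        def add_node(self, layer, node):
            self.layers[layer].add(node)

        def add_intra_edge(self, layer, u, v):
            self.intra[layer].add((u, v))

        def add_inter_edge(self, layer_u, u, layer_v, v):
            self.inter.add(((layer_u, u), (layer_v, v)))

    # One layer per entity type; an inter-layer edge models a
    # relationship ("alice wrote p1") between entities of different types.
    mln = MultiLayerNetwork()
    mln.add_node("authors", "alice")
    mln.add_node("papers", "p1")
    mln.add_inter_edge("authors", "alice", "papers", "p1")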
  2. Abstract: How do we ensure the veracity of science? The act of manipulating or fabricating scientific data has led to many high-profile fraud cases and retractions. Detecting manipulated data, however, is a challenging and time-consuming endeavor. Automated detection methods are limited due to the diversity of data types and manipulation techniques. Furthermore, patterns automatically flagged as suspicious can have reasonable explanations. Instead, we propose a nuanced approach where experts analyze tabular datasets, e.g., as part of the peer-review process, using a guided, interactive visualization approach. In this paper, we present an analysis of how manipulated datasets are created and the artifacts these techniques generate. Based on these findings, we propose a suite of visualization methods to surface potential irregularities. We have implemented these methods in Ferret, a visualization tool for data forensics work. Ferret makes potential data issues salient and provides guidance on spotting signs of tampering and differentiating them from truthful data.
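
To make "surfacing potential irregularities" concrete, below is a hedged sketch of two textbook tabular-forensics heuristics: terminal-digit frequencies (fabricated numbers often have skewed last digits, while genuine measurements tend toward uniform ones) and exact duplicate rows (a common copy-paste artifact). These are classic checks of the kind such a tool might make salient; they are not Ferret's actual methods, and the example values are made up.

    from collections import Counter
    import pandas as pd

    def terminal_digit_frequencies(series: pd.Series) -> Counter:
        # Count the final digit of each value's decimal representation.
        last = series.astype(str).str.replace(".", "", regex=False).str[-1]
        return Counter(d for d in last if d.isdigit())

    def duplicated_rows(df: pd.DataFrame) -> pd.DataFrame:
        # Return every row that appears more than once, keeping all copies.
        return df[df.duplicated(keep=False)]

    data = pd.DataFrame({"measurement": [12.5, 13.5, 14.5, 12.5, 12.5]})
    print(terminal_digit_frequencies(data["measurement"]))  # skewed toward '5'
    print(duplicated_rows(data))                            # the repeated 12.5 rows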
  3. Abstract: Large-scale digitization projects such as #ScanAllFishes and oVert are generating high-resolution microCT scans of vertebrates by the thousands. Data from these projects are shared with the community using aggregate 3D specimen repositories like MorphoSource through various open licenses. We anticipate an explosion of quantitative research in organismal biology with the convergence of available data and the methodologies to analyse them. Though the data are available, the road from a series of images to analysis is fraught with challenges for most biologists. It involves tedious tasks of data format conversion, preserving the spatial scale of the data accurately, 3D visualization and segmentation, and acquiring measurements and annotations. When scientists use commercial software with proprietary formats, a roadblock to data exchange, collaboration, and reproducibility is erected that hurts the efforts of the scientific community to broaden participation in research. We developed SlicerMorph as an extension of 3D Slicer, a biomedical visualization and analysis ecosystem with extensive visualization and segmentation capabilities built on proven Python-scriptable open-source libraries such as the Visualization Toolkit and the Insight Toolkit. In addition to the core functionalities of Slicer, SlicerMorph provides users with modules to conveniently retrieve open-access 3D models or import their own 3D volumes, to annotate 3D curve- and patch-based landmarks, generate landmark templates, conduct geometric morphometric analyses of 3D organismal form using both landmark-driven and landmark-free approaches, and create 3D animations from their results. We highlight how these individual modules can be tied together to establish complete workflows from image sequence to morphospace. Our software development efforts were supplemented with short courses and workshops that cover the fundamentals of 3D imaging and morphometric analyses as they apply to the study of organismal form and shape in evolutionary biology. Our goal is to establish a community of organismal biologists centred around Slicer and SlicerMorph to facilitate easy exchange of data and results, and collaborations using 3D specimens. Our proposition to our colleagues is that using a common open platform supported by a large user and developer community ensures the longevity and sustainability of the tools beyond the initial development effort.
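
For readers unfamiliar with the landmark-driven analyses mentioned above, the core alignment step in geometric morphometrics is Procrustes superimposition: landmark configurations are translated, scaled, and rotated into a common frame before shapes are compared. The NumPy sketch below implements ordinary (pairwise) Procrustes alignment from those standard definitions; it is illustrative only and is not SlicerMorph's code. The multi-specimen generalization (generalized Procrustes analysis) iterates this alignment against a running mean shape.

    import numpy as np

    def procrustes_align(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
        # Align `mov` (k x 3 landmarks) onto `ref` by translation,
        # scaling, and rotation; returns the superimposed copy of `mov`.
        ref_c = ref - ref.mean(axis=0)         # remove translation
        mov_c = mov - mov.mean(axis=0)
        ref_c = ref_c / np.linalg.norm(ref_c)  # remove scale (centroid size)
        mov_c = mov_c / np.linalg.norm(mov_c)
        u, _, vt = np.linalg.svd(mov_c.T @ ref_c)  # optimal rotation via SVD
        if np.linalg.det(u @ vt) < 0:          # flip to avoid a reflection
            u[:, -1] *= -1
        return mov_c @ (u @ vt)

    rng = np.random.default_rng(0)
    ref = rng.normal(size=(5, 3))                     # five 3D landmarks
    aligned = procrustes_align(ref, 2.0 * ref + 0.3)  # scaled, shifted copy aligns back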
  4. Two people looking at the same dataset will create different mental models, prioritize different attributes, and connect with different visualizations. We seek to understand the space of data abstractions associated with mental models and how well people communicate their mental models when sketching. Data abstractions have a profound influence on the visualization design, yet it’s unclear how universal they may be when not initially influenced by a representation. We conducted a study about how people create their mental models from a dataset. Rather than presenting tabular data, we presented each participant with one of three datasets in paragraph form, to avoid biasing the data abstraction and mental model. We observed various mental models, data abstractions, and depictions from the same dataset, and how these concepts are influenced by communication and purpose-seeking. Our results have implications for visualization design, especially during the discovery and data collection phase. 
  5. We consider the problem of test-time adaptation of predictive models trained on tabular data. Effectively solving this problem requires adapting models trained on the source domain to a target domain, using only unlabeled target-domain data, without access to source-domain data. Existing test-time adaptation methods for tabular data have difficulty coping with the heterogeneous features and complex dependencies inherent in tabular data. To overcome these limitations, we consider test-time adaptation in a setting wherein the logical structure of the rules is assumed to remain invariant despite distribution shift between the source and target domains, whereas the numerical parameters associated with the rules and the weights assigned to them can vary to accommodate the shift. Our method, TabLog, discretizes numerical features, models dependencies between heterogeneous features, introduces a novel contrastive loss for coping with distribution shift, and presents an end-to-end framework for efficient training and test-time adaptation by taking advantage of a logical neural network representation of a rule ensemble. We present results of experiments on several benchmark data sets demonstrating that TabLog is competitive with, or improves upon, state-of-the-art methods for test-time adaptation of predictive models trained on tabular data. Our code is available at https://github.com/WeijieyingRen/TabLog.
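
The adaptation setting described above (a frozen rule structure with tunable thresholds and rule weights) can be sketched in a few lines of PyTorch. The toy below is an illustrative assumption, not the released TabLog code at the link above: it fixes a random structure mask, uses soft threshold tests as rule conditions, and minimizes prediction entropy on unlabeled target data, standing in for TabLog's contrastive loss.

    import torch

    k_features, k_rules = 4, 8
    thresholds = torch.zeros(k_rules, k_features, requires_grad=True)  # adaptable
    weights = torch.zeros(k_rules, requires_grad=True)                 # adaptable
    structure = (torch.rand(k_rules, k_features) > 0.5).float()        # frozen mask

    def forward(x):
        # Soft rule firing: each rule ANDs the threshold tests of the
        # features its (fixed) structure mask selects.
        tests = torch.sigmoid(x.unsqueeze(1) - thresholds)  # (n, rules, feats)
        fired = (tests * structure + (1.0 - structure)).prod(dim=2)
        return torch.sigmoid(fired @ weights)               # P(class = 1)

    opt = torch.optim.Adam([thresholds, weights], lr=1e-2)
    x_target = torch.randn(32, k_features)                  # unlabeled target batch
    for _ in range(10):
        p = forward(x_target).clamp(1e-6, 1.0 - 1e-6)
        entropy = -(p * p.log() + (1 - p) * (1 - p).log()).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()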