We present Firefly, a new browser-based interactive tool for visualizing 3D particle data sets. On a typical personal computer, Firefly can simultaneously render and enable real-time interactions with ≳10 million particles, and can interactively explore data sets with billions of particles using the included custom-built octree render engine. Once created, viewing a Firefly visualization requires no installation and is immediately usable in most modern internet browsers simply by visiting a URL. As a result, a Firefly visualization works out-of-the-box on most devices including smartphones and tablets. Firefly is primarily developed for researchers to explore their own data, but can also be useful to communicate results to researchers and/or collaborators and as an effective public outreach tool. Every element of the user interface can be customized and disabled, enabling easy adaptation of the same visualization for different audiences with little additional effort. Creating a new Firefly visualization is simple with the provided Python data preprocessor that translates input data to a Firefly-compatible format and provides helpful methods for hosting instances of Firefly both locally and on the internet. In addition to visualizing the positions of particles, users can visualize vector fields (e.g., velocities) and also filter and color points by scalar fields. We share three examples of Firefly applied to astronomical data sets: (1) the FIRE cosmological zoom-in simulations, (2) the SDSS galaxy catalog, and (3) Gaia Data Release 3. A gallery of additional interactive demos is available at
- NSF-PAR ID: 10401966
- DOI prefix: 10.3847
- Journal Name: The Astrophysical Journal Supplement Series
- Volume: 265
- Issue: 2
- ISSN: 0067-0049
- Size: Article No. 38
- Sponsoring Org: National Science Foundation
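The abstract above describes a Python preprocessor that translates input data into a Firefly-compatible format. As a rough illustration of that kind of translation step (a stdlib-only sketch: `make_particle_group` and all field names here are hypothetical, not Firefly's actual API), particle coordinates and scalar fields can be bundled into a JSON payload for a browser renderer:

```python
import json
import random

def make_particle_group(name, coords, velocities=None, scalars=None):
    """Bundle one particle type (e.g. gas or stars) into a dict a
    browser-side renderer could consume; field names are illustrative."""
    group = {"name": name, "coordinates": coords}
    if velocities is not None:
        group["vector_fields"] = {"velocity": velocities}
    if scalars is not None:
        # Scalar fields can drive colormaps and interactive filters.
        group["scalar_fields"] = scalars
    return group

# Fake data: a Gaussian blob of 1000 points with a radius scalar field.
random.seed(42)
coords, radii = [], []
for _ in range(1000):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    coords.append([x, y, z])
    radii.append((x * x + y * y + z * z) ** 0.5)

payload = {"groups": [make_particle_group("stars", coords,
                                          scalars={"radius": radii})]}
with open("particles.json", "w") as f:
    json.dump(payload, f)
```

A real preprocessor would also handle decimation and octree construction for the billion-particle case; this sketch shows only the format-translation idea.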
More Like this
-
Recent research in empirical software engineering is applying techniques from neurocognitive science and breaking new ground in the ways that researchers can model and analyze the cognitive processes of developers as they interact with software artifacts. However, given the novelty of this line of research, only one tool exists to help researchers represent and analyze this kind of multi-modal biometric data. While this tool does help with visualizing temporal eye-tracking and physiological data, it does not allow for the mapping of physiological data to source code elements, instead projecting information over images of code. One drawback of this is that researchers are still unable to meaningfully combine and map physiological and eye-tracking data to source code artifacts. The use of images also bars the support of long or multiple code files, which prevents researchers from analyzing data from experiments conducted in realistic settings. To address these drawbacks, we propose VITALSE, a tool for the interactive visualization of combined multi-modal biometric data for software engineering tasks. VITALSE provides interactive and customizable temporal heatmaps created with synchronized eye-tracking and biometric data. The tool supports analysis on multiple files, user-defined annotations for points of interest over source code elements, and high-level customizable metric summaries for the provided dataset. VITALSE, a video demonstration, and sample data to demonstrate its capabilities can be found at http://www.vitalse.app.
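The mapping step this abstract calls out, attaching physiological data to source code elements rather than to images of code, can be illustrated with a toy stdlib sketch. `line_heatmap` and its data shapes are invented for illustration and are not VITALSE's implementation:

```python
from collections import defaultdict

def line_heatmap(fixations, heart_rates):
    """Aggregate eye-tracking fixations (timestamp_ms, source_line, duration_ms)
    into per-line dwell totals, attaching the mean heart rate sampled during
    each fixation. A toy stand-in for synchronized multi-modal aggregation."""
    acc = defaultdict(lambda: {"dwell_ms": 0, "hr_samples": []})
    for t, line, dur in fixations:
        acc[line]["dwell_ms"] += dur
        # Pick up physiological samples falling inside the fixation window.
        acc[line]["hr_samples"] += [hr for ts, hr in heart_rates
                                    if t <= ts < t + dur]
    return {
        line: {
            "dwell_ms": v["dwell_ms"],
            "mean_hr": (sum(v["hr_samples"]) / len(v["hr_samples"])
                        if v["hr_samples"] else None),
        }
        for line, v in acc.items()
    }

fixations = [(0, 12, 300), (300, 12, 200), (500, 40, 400)]
heart_rates = [(100, 70), (350, 72), (600, 80)]
heat = line_heatmap(fixations, heart_rates)
print(heat[12])  # line 12 accumulated 500 ms across two fixations
```

Keying the aggregation by source line (rather than image pixel) is what makes the result robust to long or multiple code files.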
-
Abstract We present the first release of the Gravitational Wave AfterglowPy Analysis (GWAPA) webtool (available at https://gwapa.web.roma2.infn.it/). GWAPA is designed to provide the community with an interactive tool for rapid analysis of gravitational-wave afterglow counterparts and can be extended to the general case of gamma-ray burst afterglows seen at different angles. It is based on the afterglowpy package and allows users to upload observational data and vary afterglow parameters to infer the properties of the explosion. Multiple jet structures, including top-hat, Gaussian, and power-law, in addition to a spherical outflow model, are implemented. A Python script for MCMC fitting is also available to download, with initial guesses taken from GWAPA.
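The downloadable MCMC script is not reproduced in the abstract, but the general pattern of Metropolis sampling of light-curve parameters against observations can be sketched with a toy power-law model (stdlib only; the model and parameter names are placeholders and do not reflect afterglowpy's API):

```python
import math
import random

def model_flux(t, log_f0, alpha):
    """Toy afterglow light curve: F(t) = F0 * t^(-alpha)."""
    return 10 ** log_f0 * t ** (-alpha)

def log_likelihood(params, data):
    log_f0, alpha = params
    # Gaussian likelihood in log-flux with a fixed 0.1 dex scatter.
    return -0.5 * sum(
        ((math.log10(f) - math.log10(model_flux(t, log_f0, alpha))) / 0.1) ** 2
        for t, f in data
    )

def metropolis(data, start, steps=5000, scale=0.05):
    """Plain Metropolis random walk over (log_f0, alpha)."""
    random.seed(0)
    chain, current = [], list(start)
    ll = log_likelihood(current, data)
    for _ in range(steps):
        proposal = [p + random.gauss(0, scale) for p in current]
        ll_new = log_likelihood(proposal, data)
        if math.log(random.random()) < ll_new - ll:  # accept/reject
            current, ll = proposal, ll_new
        chain.append(list(current))
    return chain

# Synthetic observations from log_f0=-3, alpha=1.2; then recover alpha.
truth = (-3.0, 1.2)
data = [(t, model_flux(t, *truth)) for t in (1, 3, 10, 30, 100)]
chain = metropolis(data, start=(-2.5, 1.0))
burn = chain[len(chain) // 2:]  # discard first half as burn-in
alpha_est = sum(s[1] for s in burn) / len(burn)
print(round(alpha_est, 2))
```

A real fit would use a physical afterglow model and priors on jet structure; the "initial guesses taken from GWAPA" correspond to the `start` argument here.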
-
Abstract In pursuit of scientific discovery, vast collections of unstructured structural and functional images are acquired; however, only an infinitesimally small fraction of this data is rigorously analyzed, with an even smaller fraction ever being published. One method to accelerate scientific discovery is to extract more insight from costly scientific experiments already conducted. Unfortunately, data from scientific experiments tend only to be accessible by the originator, who knows the experiments and directives. Moreover, there are no robust methods to search unstructured databases of images to deduce correlations and insight. Here, we develop a machine learning approach to create image similarity projections to search unstructured image databases. To improve these projections, we develop and train a model to include symmetry-aware features. As an exemplar, we use a set of 25,133 piezoresponse force microscopy images collected on diverse materials systems over five years. We demonstrate how this tool can be used for interactive recursive image searching and exploration, highlighting structural similarities at various length scales. This tool justifies continued investment in federated scientific databases with standardized metadata schemas, where the combination of filtering and recursive interactive searching can uncover synthesis-structure-property relations. We provide a customizable open-source package (https://github.com/m3-learning/Recursive_Symmetry_Aware_Materials_Microstructure_Explorer) of this interactive tool for researchers to use with their data.
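Once images are mapped to feature vectors, the search itself reduces to a nearest-neighbor lookup in feature space. A minimal sketch, assuming each image already has an embedding (the vectors below are invented; the real tool uses learned symmetry-aware features):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, database, top_k=3):
    """Rank database images by cosine similarity to the query embedding.
    Recursive exploration = re-running search() with a returned hit as
    the new query."""
    scored = sorted(database.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Toy 4-d embeddings standing in for learned symmetry-aware features.
db = {
    "img_a": [1.0, 0.1, 0.0, 0.0],
    "img_b": [0.9, 0.2, 0.1, 0.0],
    "img_c": [0.0, 0.0, 1.0, 0.9],
}
print(search([1.0, 0.0, 0.0, 0.0], db, top_k=2))  # → ['img_a', 'img_b']
```

At the scale of tens of thousands of images an approximate-nearest-neighbor index would replace the exhaustive scan, but the interface is the same.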
-
Abstract Background Direct-sequencing technologies, such as Oxford Nanopore’s, are delivering long RNA reads with great efficacy and convenience. These technologies afford an ability to detect post-transcriptional modifications at single-molecule resolution, promising new insights into the functional roles of RNA. However, realizing this potential requires new tools to analyze and explore this type of data.
Result Here, we present Sequoia, a visual analytics tool that allows users to interactively explore nanopore sequences. Sequoia combines a Python-based backend with a multi-view visualization interface, enabling users to import raw nanopore sequencing data in the Fast5 format, cluster sequences based on electric-current similarities, and drill down into signals to identify properties of interest. We demonstrate the application of Sequoia by generating and analyzing ~500k reads from direct RNA sequencing data of the human HeLa cell line. We focus on comparing signal features from m6A and m5C RNA modifications as the first step towards building automated classifiers. We show how, through iterative visual exploration and tuning of dimensionality-reduction parameters, we can separate modified RNA sequences from their unmodified counterparts. We also document new, qualitative signal signatures that characterize these modifications from otherwise normal RNA bases, which we were able to discover from the visualization.
Conclusions Sequoia's interactive features complement existing computational approaches in nanopore-based RNA workflows. The insights gleaned through visual analysis should help users in developing rationales, hypotheses, and insights into the dynamic nature of RNA. Sequoia is available at https://github.com/dnonatar/Sequoia.
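Clustering reads by electric-current similarity can be illustrated with a toy greedy scheme over fixed-length current traces (stdlib only; the data, threshold, and `cluster_signals` function are invented for illustration, not Sequoia's actual clustering):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_signals(signals, threshold):
    """Greedy single-pass clustering: assign each fixed-length current trace
    to the first cluster whose seed trace lies within `threshold`, else
    start a new cluster."""
    clusters = []  # list of (seed_trace, member_names)
    for name, sig in signals.items():
        for seed, members in clusters:
            if euclidean(sig, seed) < threshold:
                members.append(name)
                break
        else:
            clusters.append((sig, [name]))
    return [members for _, members in clusters]

# Toy current traces (pA), e.g. mean current per base window.
signals = {
    "read1": [80, 95, 70, 88],
    "read2": [81, 94, 71, 87],   # similar to read1
    "read3": [120, 60, 110, 55], # distinct signature, e.g. a modified base
}
print(cluster_signals(signals, threshold=5.0))  # → [['read1', 'read2'], ['read3']]
```

Real nanopore traces are variable-length and noisy, so a practical pipeline would add resampling or dynamic time warping before any distance computation.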
-
Abstract We present a critical analysis of physics-informed neural operators (PINOs) to solve partial differential equations (PDEs) that are ubiquitous in the study and modeling of physics phenomena using carefully curated datasets. Further, we provide a benchmarking suite which can be used to evaluate PINOs in solving such problems. We first demonstrate that our methods reproduce the accuracy and performance of other neural operators published elsewhere in the literature to learn the 1D wave equation and the 1D Burgers equation. Thereafter, we apply our PINOs to learn new types of equations, including the 2D Burgers equation in the scalar, inviscid, and vector types. Finally, we show that our approach is also applicable to learn the physics of the 2D linear and nonlinear shallow water equations, which involve three coupled PDEs. We release our artificial intelligence surrogates and scientific software to produce initial data and boundary conditions to study a broad range of physically motivated scenarios. We provide the source code, an interactive website to visualize the predictions of our PINOs, and a tutorial for their use at the Data and Learning Hub for Science.
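The "physics-informed" part of a PINO is a PDE-residual penalty added to the training loss. A stdlib sketch of such a residual for the 1D viscous Burgers equation on a finite-difference grid (illustrative only; the paper's operators act on learned solution fields, not hand-built grids like this):

```python
import math

def burgers_residual(u_prev, u_curr, dx, dt, nu):
    """Finite-difference residual of the 1D viscous Burgers equation,
    u_t + u * u_x = nu * u_xx, at interior grid points. In a
    physics-informed loss, this residual (squared and averaged) is the
    penalty term added alongside any data-fitting term."""
    res = []
    for i in range(1, len(u_curr) - 1):
        u_t = (u_curr[i] - u_prev[i]) / dt
        u_x = (u_curr[i + 1] - u_curr[i - 1]) / (2 * dx)
        u_xx = (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]) / dx ** 2
        res.append(u_t + u_curr[i] * u_x - nu * u_xx)
    return res

def physics_loss(u_prev, u_curr, dx, dt, nu):
    r = burgers_residual(u_prev, u_curr, dx, dt, nu)
    return sum(v * v for v in r) / len(r)

n, dx, dt, nu = 11, 0.1, 0.01, 0.05
# A constant state satisfies the equation exactly: zero residual.
u0 = [1.0] * n
assert physics_loss(u0, u0, dx, dt, nu) == 0.0
# A frozen sine profile violates the equation: nonzero penalty.
u1 = [math.sin(2 * math.pi * i * dx) for i in range(n)]
print(physics_loss(u1, u1, dx, dt, nu) > 0)  # → True
```

Minimizing this penalty drives a surrogate toward solutions that obey the PDE even where no training data exists, which is what distinguishes a PINO from a purely data-driven neural operator.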