Title: Adaptive Compositing and Navigation of Variable Resolution Images
Abstract

We present a new, high-quality compositing pipeline and navigation approach for variable resolution imagery. The motivation for this work is to explore variable resolution images as a quick and accessible alternative to traditional gigapixel mosaics. Instead of the tedious acquisition of many images using specialized hardware, variable resolution images can achieve zooms as deep as those of large mosaics, but with only a handful of images. For this approach to be a viable alternative, the state-of-the-art in variable resolution compositing needs to be improved to match the high-quality approaches commonly used in mosaic compositing. To this end, we provide a novel variable resolution mosaic seam calculation and gradient domain color correction. This approach includes a new priority order graph cuts computation along with a practical data structure that keeps memory overhead low. In addition, navigating variable resolution images is challenging, especially at the zoom factors targeted in this work. To address this challenge, we introduce a new image interaction for variable resolution imagery: a pan that automatically, and smoothly, hugs available resolution. Finally, we provide several real-world examples of our approach producing high-quality variable resolution mosaics with the deep zooms typically associated with gigapixel photography.
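To make the gradient domain color correction step concrete, the sketch below blends one grayscale image into another by solving a small Poisson system, so that the composite follows the gradients of the inserted image while matching the base mosaic on the seam boundary. This is a generic, single-resolution illustration and not the paper's variable resolution pipeline; the seam here is a given mask that stands in for the result of the priority order graph cuts computation.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def poisson_composite(base, insert, mask):
        # Blend `insert` into `base` where mask is True: interior pixels
        # reproduce the gradients of `insert` (Poisson equation) while
        # boundary pixels agree with `base`. The mask must not touch the
        # image border.
        assert not (mask[0].any() or mask[-1].any()
                    or mask[:, 0].any() or mask[:, -1].any())
        idx = -np.ones(base.shape, dtype=int)
        idx[mask] = np.arange(mask.sum())
        A = sp.lil_matrix((mask.sum(), mask.sum()))
        b = np.zeros(mask.sum())
        for y, x in zip(*np.nonzero(mask)):
            i = idx[y, x]
            A[i, i] = 4.0
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                b[i] += insert[y, x] - insert[ny, nx]   # target gradient
                if mask[ny, nx]:
                    A[i, idx[ny, nx]] = -1.0
                else:
                    b[i] += base[ny, nx]                # Dirichlet boundary
        out = base.copy()
        out[mask] = spsolve(A.tocsr(), b)
        return out

    # Toy usage: a bright patch melts seamlessly into a dark background,
    # since only its gradients (here zero) are preserved.
    base = np.zeros((32, 32)); insert = np.ones((32, 32))
    mask = np.zeros((32, 32), bool); mask[8:24, 8:24] = True
    blended = poisson_composite(base, insert, mask)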

 
Award ID(s):
1664848
NSF-PAR ID:
10453525
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
40
Issue:
1
ISSN:
0167-7055
Page Range / eLocation ID:
p. 138-150
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Background

    Advances in imagery at atomic and near-atomic resolution, such as cryogenic electron microscopy (cryo-EM), have led to an influx of high resolution images of proteins and other macromolecular structures to data banks worldwide. Producing a protein structure from the discrete voxel grid data of cryo-EM maps involves interpolation into the continuous spatial domain. We present a novel data format called the neural cryo-EM map, which is formed from a set of neural networks that accurately parameterize cryo-EM maps and provide native, spatially continuous data for density and gradient. As a case study of this data format, we create graph-based interpretations of high resolution experimental cryo-EM maps.
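    As a heavily simplified sketch of the neural map idea, the snippet below parameterizes a scalar density field with a small MLP and obtains the spatially continuous gradient by automatic differentiation. The architecture, layer sizes, and initialization are illustrative assumptions, not the published design, and fitting the network to an actual cryo-EM voxel grid is omitted.

        import jax
        import jax.numpy as jnp

        def init_mlp(key, sizes=(3, 64, 64, 1)):
            # Small random MLP: a coordinate in R^3 -> one density value.
            params = []
            for m, n in zip(sizes[:-1], sizes[1:]):
                key, sub = jax.random.split(key)
                params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m),
                               jnp.zeros(n)))
            return params

        def density(params, xyz):
            h = xyz
            for W, b in params[:-1]:
                h = jnp.tanh(h @ W + b)
            W, b = params[-1]
            return (h @ W + b)[0]            # continuous density at xyz

        # The spatial gradient is "native" to the format: just differentiate.
        grad_density = jax.grad(density, argnums=1)

        params = init_mlp(jax.random.PRNGKey(0))
        xyz = jnp.array([0.1, -0.3, 0.5])
        rho, g = density(params, xyz), grad_density(params, xyz)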

    Results

    Normalized cryo-EM map values interpolated with the non-linear neural cryo-EM format are more accurate than conventional tri-linear interpolation, consistently scoring less than 0.01 mean absolute error versus up to 0.12. Our graph-based interpretations of 115 experimental cryo-EM maps from 1.15 to 4.0 Å resolution provide high coverage of the underlying amino acid residue locations, while node accuracy correlates with resolution. The nodes of graphs created from atomic resolution maps (higher than 1.6 Å) provide greater than 99% residue coverage as well as 85% full atomic coverage, with a mean root mean squared deviation of 0.19 Å. The remaining graphs have a mean residue coverage of 84%, with less specificity of the nodes due to experimental noise and differences of density context at lower resolutions.

    Conclusions

    The fully continuous and differentiable nature of the neural cryo-EM map enables the adaptation of the voxel data to alternative data formats, such as a graph that characterizes the atomic locations of the underlying protein or macromolecular structure. Graphs created from atomic resolution maps are superior in finding atom locations and may serve as input to predictive residue classification and structure segmentation methods. This work may be generalized to transform any 3D grid-based data format into a non-linear, continuous, and differentiable format for downstream geometric deep learning applications.

     
  2. Deep learning has made great strides in medical imaging, enabled by hardware advances in GPUs. One major constraint on the development of new models has been the saturation of GPU memory resources during training. This is especially true in computational pathology, where images regularly contain more than 1 billion pixels. Due to these hardware limitations, pathological images are traditionally divided into small patches to enable deep learning. In this work, we explore whether the shared GPU/CPU memory architecture of the M1 Ultra systems-on-a-chip (SoCs) recently released by Apple, Inc. may provide a solution. These affordable systems (less than $5000) provide access to 128 GB of unified memory (Mac Studio with M1 Ultra SoC). As a proof of concept for gigapixel deep learning, we separated tissue from background on gigapixel areas of whole slide images (WSIs). The model was a modified U-Net (4492 parameters) leveraging large kernels and high stride. The M1 Ultra SoC was able to train the model directly on gigapixel images (16000×64000 pixels, 1.024 billion pixels) with a batch size of 1, using over 100 GB of unified memory at an average speed of 1 minute and 21 seconds per batch with Tensorflow 2/Keras. As expected, the model converged with a high Dice score of 0.989 ± 0.005. Training up to this point took 111 hours and 24 minutes over 4940 steps. Other high-RAM GPUs, such as the NVIDIA A100 (the largest commercially accessible at 80 GB, ∼$15000), are not yet widely available (in preview for select regions on Amazon Web Services at $40.96/hour as a group of 8). This study is a promising step towards WSI-wise end-to-end deep learning with prevalent network architectures.
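    A minimal Keras sketch in the spirit of this setup is shown below: a few-channel encoder-decoder with large kernels and aggressive striding, so that a whole gigapixel slide can be segmented at a batch size of 1. Layer counts, kernel sizes, and channel widths are illustrative guesses, not the authors' exact 4492-parameter architecture.

        import tensorflow as tf
        from tensorflow.keras import layers

        def tiny_unet(kernel=7, stride=4):
            inp = layers.Input(shape=(None, None, 3))      # full WSI, batch of 1
            e1 = layers.Conv2D(8, kernel, strides=stride, padding="same",
                               activation="relu")(inp)     # heavy downsampling
            e2 = layers.Conv2D(16, kernel, strides=stride, padding="same",
                               activation="relu")(e1)
            d1 = layers.Conv2DTranspose(8, kernel, strides=stride,
                                        padding="same", activation="relu")(e2)
            d1 = layers.Concatenate()([d1, e1])            # U-Net skip connection
            d0 = layers.Conv2DTranspose(8, kernel, strides=stride,
                                        padding="same", activation="relu")(d1)
            out = layers.Conv2D(1, 1, activation="sigmoid")(d0)  # tissue mask
            return tf.keras.Model(inp, out)

        model = tiny_unet()
        model.compile(optimizer="adam", loss="binary_crossentropy")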
  3. Dense time-series remote sensing data with detailed spatial information are highly desired for monitoring dynamic earth systems. Due to sensor tradeoffs, most remote sensing systems cannot provide images with both high spatial and high temporal resolution. Spatiotemporal image fusion models provide a feasible solution for generating such imagery, yet existing fusion methods are limited in predicting rapid and/or transient phenological changes. Additionally, spatiotemporal fusion research lacks a systematic approach to assessing how varying levels of temporal phenological change affect fusion results. The objective of this study is to develop an innovative hybrid deep learning model that can effectively and robustly fuse satellite imagery of various spatial and temporal resolutions. The proposed model integrates two types of network models: the super-resolution convolutional neural network (SRCNN) and long short-term memory (LSTM). SRCNN enhances the coarse images by restoring degraded spatial details, while LSTM learns and extracts the temporal changing patterns from the time-series images. To systematically assess the effects of varying levels of phenological change, we identify image phenological transition dates and design three scenarios representing rapid, moderate, and minimal phenological changes. The hybrid deep learning model, alongside three benchmark fusion models, is assessed under these scenarios. Results indicate that the hybrid model performs significantly better when rapid or moderate phenological changes are present. It holds great potential for generating high-quality time-series datasets with both high spatial and temporal resolution, which can further benefit studies of terrestrial system dynamics. This approach to understanding the effect of phenological changes will also help us better comprehend the strengths and weaknesses of current and future fusion models.
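    One plausible way to wire these two components together is sketched below: an SRCNN-style block (the classic 9-1-5 layer layout) restores spatial detail in each image of the series, and a convolutional LSTM summarizes the temporal changing patterns into a fused prediction. The integration shown here is an assumption for illustration; the authors' exact architecture may differ.

        import tensorflow as tf
        from tensorflow.keras import layers

        def srcnn_lstm(time_steps=5, bands=4):
            seq = layers.Input(shape=(time_steps, None, None, bands))
            # SRCNN-style spatial enhancement, applied independently per date.
            x = layers.TimeDistributed(layers.Conv2D(64, 9, padding="same",
                                                     activation="relu"))(seq)
            x = layers.TimeDistributed(layers.Conv2D(32, 1, padding="same",
                                                     activation="relu"))(x)
            x = layers.TimeDistributed(layers.Conv2D(bands, 5,
                                                     padding="same"))(x)
            # A convolutional LSTM captures the temporal changing patterns.
            x = layers.ConvLSTM2D(32, 3, padding="same",
                                  return_sequences=False)(x)
            out = layers.Conv2D(bands, 3, padding="same")(x)  # fused fine image
            return tf.keras.Model(seq, out)

        model = srcnn_lstm()
        model.compile(optimizer="adam", loss="mae")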
  4. We describe a case study using the Montage image mosaic engine to create maps of the ALLWISE image data set in the Hierarchical Progressive Survey (HiPS) sky-tessellation scheme. Our approach demonstrates that Montage reveals the science content of infrared images in greater detail than has hitherto been possible in HiPS maps. The approach exploits two characteristics of the Montage image mosaic engine that are, to our knowledge, unique: background modeling to rectify the time-variable image backgrounds to common levels, and an adaptive image stretch to present images for visualization. The creation of the maps is supported by the development of four new tools that, when fully tested, will become part of the Montage distribution. The compute-intensive part of the processing lies in the reprojection of the images, and we show how we optimized the processing for efficient creation of mosaics that are used in turn to create maps in the HiPS tiling scheme. We plan to apply our methodology to infrared image data sets such as those delivered by Spitzer, 2MASS, IRAS, and Planck.
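    As a rough illustration of what an adaptive stretch does, the helper below clips a mosaic tile to data-driven percentiles and applies an asinh transfer function so that faint background structure and bright sources both remain visible. This is a generic sketch, not Montage's actual stretch algorithm; the percentile cutoffs and softening factor are arbitrary choices.

        import numpy as np

        def adaptive_stretch(img, lo_pct=0.5, hi_pct=99.5, soften=10.0):
            # Percentiles adapt the display range to each tile's statistics.
            lo, hi = np.nanpercentile(img, [lo_pct, hi_pct])
            x = np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
            # asinh compresses highlights while preserving faint detail.
            y = np.arcsinh(soften * x) / np.arcsinh(soften)
            return (255.0 * y).astype(np.uint8)   # 8-bit display tile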
  5. Abstract

    Morphological processes often induce meter-scale elevation changes. When a volcano erupts, tracking such processes provides insights into the style and evolution of eruptive activity and related hazards. Compared to optical remote-sensing products, synthetic aperture radar (SAR) observes surface change during inclement weather and at night. Differential SAR interferometry estimates the phase change between SAR acquisitions and is commonly applied to quantify deformation, but large deformation or other sources of coherence loss can limit its use. We develop a new approach applicable when repeated digital elevation models (DEMs) cannot otherwise be retrieved. Assuming an isotropic radar cross-section, we estimate meter-scale vertical morphological change directly from SAR amplitude images via an optimization method that utilizes a high-quality DEM. We verify our implementation by simulating a collapse feature modulated onto topography: we simulate radar effects and recover the simulated collapse. To validate our method, we estimate elevation changes from TerraSAR-X stripmap images for the 2011–2012 eruption of Mount Cleveland. Our results reproduce those of two previous studies: one that used the same dataset and another based on thermal satellite data. By applying this method to the 2019–2020 eruption of Shishaldin Volcano, Alaska, we generate elevation change time series from dozens of co-registered TerraSAR-X high-resolution spotlight images. Our results quantify previously unresolved cone growth in November 2019, collapses associated with explosions in December–January, and further changes in crater elevations into spring 2020. This method can be used to track meter-scale morphology changes for ongoing eruptions with low latency as SAR imagery becomes available.
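    A toy one-dimensional version of the inversion is sketched below: given a DEM profile and an observed amplitude profile, per-pixel elevation change is recovered by least-squares fitting of the amplitudes predicted by an isotropic-scatterer forward model (amplitude proportional to the cosine of the local incidence angle). The forward model, look angle, and smoothness weight are simplifying assumptions; the actual method operates on 2-D TerraSAR-X images with full radar geometry.

        import numpy as np
        from scipy.optimize import least_squares

        LOOK_ANGLE = np.deg2rad(35.0)   # assumed radar look angle
        DX = 10.0                       # ground spacing (m)

        def simulate_amplitude(z):
            slope = np.gradient(z, DX)                    # terrain slope
            local_inc = LOOK_ANGLE - np.arctan(slope)     # local incidence
            return np.clip(np.cos(local_inc), 0.0, None)  # isotropic scatterer

        def estimate_dz(dem, observed_amp, smooth=1.0):
            def residuals(dz):
                data = simulate_amplitude(dem + dz) - observed_amp
                reg = smooth * np.diff(dz)        # favor smooth change maps
                anchor = 0.1 * dz[:1]             # pin the unchanged far field
                return np.concatenate([data, reg, anchor])
            return least_squares(residuals, np.zeros_like(dem)).x

        # Synthetic check: a 20 m collapse pit modulated onto a ramp.
        x = np.arange(200)
        dem = 0.2 * x * DX
        true_dz = -20.0 * np.exp(-((x - 100) / 10.0) ** 2)
        recovered = estimate_dz(dem, simulate_amplitude(dem + true_dz))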

     