Deep learning has made great strides in medical imaging, enabled by hardware advances in GPUs. One major constraint on the development of new models has been the saturation of GPU memory during training. This is especially true in computational pathology, where images regularly contain more than 1 billion pixels; due to hardware limits, these pathology images are traditionally divided into small patches for deep learning. In this work, we explore whether the shared GPU/CPU memory architecture of the M1 Ultra systems-on-a-chip (SoCs) recently released by Apple, Inc. may provide a solution. These affordable systems (less than $5,000 for a Mac Studio with the M1 Ultra SoC) provide access to 128 GB of unified memory. As a proof of concept for gigapixel deep learning, we segmented tissue from background over gigapixel areas of whole slide images (WSIs). The model was a modified U-Net (4,492 parameters) leveraging large kernels and a high stride. Using TensorFlow 2/Keras, the M1 Ultra SoC trained the model directly on gigapixel images (16,000 × 64,000 pixels, 1.024 billion pixels) with a batch size of 1, consuming over 100 GB of unified memory at an average speed of 1 minute and 21 seconds per batch. As expected, the model converged to a high Dice score of 0.989 ± 0.005; training to this point took 111 hours and 24 minutes over 4,940 steps. Other high-memory GPUs, such as the NVIDIA A100 (the largest commercially accessible at 80 GB, ~$15,000), are not yet widely available (in preview for select regions on Amazon Web Services at $40.96/hour for a group of 8). This study is a promising step towards WSI-wise, end-to-end deep learning with prevalent network architectures.
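To put these numbers in perspective, a quick back-of-the-envelope calculation shows why 128 GB of unified memory matters. The sketch assumes a 3-channel float32 input tensor; the channel count and precision are assumptions, not stated in the abstract:

```python
# Back-of-the-envelope memory estimate for one gigapixel training input.
# Assumes a 3-channel float32 image (assumption; not stated in the abstract).
height, width, channels = 16_000, 64_000, 3
bytes_per_value = 4  # float32

pixels = height * width
input_gb = pixels * channels * bytes_per_value / 1e9

print(f"{pixels / 1e9:.3f} billion pixels")       # 1.024 billion
print(f"{input_gb:.3f} GB for one input tensor")  # 12.288 GB
```

A single full-resolution input tensor already exceeds the memory of most consumer GPUs before any activations or gradients are stored, which is consistent with the reported >100 GB process footprint.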
Adaptive Compositing and Navigation of Variable Resolution Images
Abstract We present a new, high-quality compositing pipeline and navigation approach for variable resolution imagery. The motivation of this work is to explore the use of variable resolution images as a quick and accessible alternative to traditional gigapixel mosaics. Instead of the common tedious acquisition of many images using specialized hardware, variable resolution images can achieve zooms as deep as those of large mosaics, but with only a handful of images. For this approach to be a viable alternative, the state of the art in variable resolution compositing needs to be improved to match the high-quality approaches commonly used in mosaic compositing. To this end, we provide a novel variable resolution mosaic seam calculation and gradient domain color correction. This approach includes a new priority order graph cuts computation along with a practical data structure to keep memory overhead low. In addition, navigating variable resolution images is challenging, especially at the zoom factors targeted in this work. To address this challenge, we introduce a new image interaction for variable resolution imagery: a pan that automatically, and smoothly, hugs available resolution. Finally, we provide several real-world examples of our approach producing high-quality variable resolution mosaics with deep zooms typically associated with gigapixel photography.
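The idea behind gradient-domain color correction can be illustrated in one dimension: rebuild the composite from its gradients while suppressing the jump at the seam, so a constant exposure offset between two source images is spread away instead of appearing as a visible step. The sketch below is a toy stand-in (the paper's correction operates in 2-D over variable resolution mosaics), assuming a single constant offset between two strips:

```python
import numpy as np

def gradient_domain_stitch(a, b):
    """Toy 1-D gradient-domain color correction across a seam.

    Concatenates strips `a` and `b`, then rebuilds the signal from its
    gradients, replacing the seam gradient with the average of its
    neighbors so a constant exposure offset does not show as a step.
    (Assumes each strip has at least two samples.)
    """
    g = np.concatenate([a, b]).astype(float)
    d = np.diff(g)                        # composite gradients
    s = len(a) - 1                        # seam sits between a[-1] and b[0]
    d[s] = 0.5 * (d[s - 1] + d[s + 1])    # suppress the seam jump
    out = np.empty_like(g)
    out[0] = g[0]                         # pin one boundary value
    out[1:] = g[0] + np.cumsum(d)         # integrate gradients back up
    return out

# Two strips of the same smooth ramp, with a +20 exposure offset on the right:
left  = np.arange(0, 5, dtype=float)          # 0,1,2,3,4
right = np.arange(5, 10, dtype=float) + 20.0  # 25,...,29
print(gradient_domain_stitch(left, right))    # [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
```

The corrected signal is the original smooth ramp with the 20-unit seam step removed; a 2-D version of this idea is what a Poisson-style gradient-domain solve provides.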
- Award ID(s): 1664848
- PAR ID: 10453525
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Computer Graphics Forum
- Volume: 40
- Issue: 1
- ISSN: 0167-7055
- Page Range / eLocation ID: p. 138-150
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We describe a case study using the Montage image mosaic engine to create maps of the ALLWISE image data set in the Hierarchical Progressive Survey (HiPS) sky-tessellation scheme. Our approach demonstrates that Montage reveals the science content of infrared images in greater detail than has hitherto been possible in HiPS maps. The approach exploits two unique (to our knowledge) characteristics of the Montage image mosaic engine: background modeling to rectify the time-variable image backgrounds to common levels, and an adaptive image stretch to present images for visualization. The creation of the maps is supported by the development of four new tools that, when fully tested, will become part of the Montage distribution. The compute-intensive part of the processing lies in the reprojection of the images, and we show how we optimized the processing for efficient creation of mosaics that are used in turn to create maps in the HiPS tiling scheme. We plan to apply our methodology to infrared image data sets such as those delivered by Spitzer, 2MASS, IRAS, and Planck.
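As a rough illustration of what an adaptive stretch accomplishes, here is a generic percentile-based stretch in numpy. This is not Montage's own algorithm (its viewer provides more sophisticated histogram-based stretches); it is only a minimal sketch of the idea of adapting display range to image statistics:

```python
import numpy as np

def percentile_stretch(img, lo=1.0, hi=99.0):
    """Clip an image to its [lo, hi] percentile range and rescale to [0, 1].

    A generic adaptive stretch for visualization; Montage's own stretch
    is histogram-based and more sophisticated -- this is only a stand-in.
    """
    vmin, vmax = np.percentile(img, [lo, hi])
    return np.clip((img - vmin) / (vmax - vmin), 0.0, 1.0)

# Skewed synthetic "infrared" frame: faint background plus one bright source.
rng = np.random.default_rng(0)
frame = rng.exponential(scale=1.0, size=(64, 64))
frame[32, 32] = 1e4  # a single saturated pixel would otherwise flatten the display
stretched = percentile_stretch(frame)
print(stretched.min(), stretched.max())  # 0.0 1.0
```

Because the display range adapts to the pixel distribution rather than the absolute min/max, one saturated source no longer crushes the faint background to black.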
Dense time-series remote sensing data with detailed spatial information are highly desired for the monitoring of dynamic earth systems. Due to the sensor tradeoff, most remote sensing systems cannot provide images with both high spatial and temporal resolutions. Spatiotemporal image fusion models provide a feasible solution to generate this type of satellite imagery, yet existing fusion methods are limited in predicting rapid and/or transient phenological changes. Additionally, a systematic approach to assessing and understanding how varying levels of temporal phenological change affect fusion results is lacking in spatiotemporal fusion research. The objective of this study is to develop an innovative hybrid deep learning model that can effectively and robustly fuse satellite imagery of various spatial and temporal resolutions. The proposed model integrates two types of network models: a super-resolution convolutional neural network (SRCNN) and a long short-term memory (LSTM) network. The SRCNN can enhance the coarse images by restoring degraded spatial details, while the LSTM can learn and extract the temporal changing patterns from the time-series images. To systematically assess the effects of varying levels of phenological change, we identify image phenological transition dates and design three temporal phenological change scenarios representing rapid, moderate, and minimal phenological change. The hybrid deep learning model, alongside three benchmark fusion models, is assessed in the different scenarios. Results indicate the hybrid deep learning model yields significantly better results when rapid or moderate phenological changes are present. It holds great potential for generating high-quality time-series datasets of both high spatial and temporal resolutions, which can further benefit terrestrial system dynamic studies. The innovative approach to understanding the effect of phenological changes will help us better comprehend the strengths and weaknesses of current and future fusion models.
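The temporal half of the hybrid model is a standard LSTM. A minimal numpy sketch of one LSTM step (random weights and hypothetical sizes, not the authors' trained network) illustrates the gated recurrence such a model uses to track a pixel's reflectance time series:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: x input (d,), h hidden (n,), c cell state (n,).

    W: (4n, d) input weights, U: (4n, n) recurrent weights, b: (4n,) bias,
    stacked in [input, forget, output, candidate] gate order.
    """
    n = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o, g = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated state update
    h_new = sigmoid(o) * np.tanh(c_new)               # gated output
    return h_new, c_new

# Hypothetical sizes: a 4-band pixel time series, 8 hidden units, 6 dates.
rng = np.random.default_rng(1)
d, n, T = 4, 8, 6
W, U, b = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n)), np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
for t in range(T):  # run the recurrence over the time series
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)  # (8,)
```

The forget and input gates are what allow the cell state to carry slowly varying phenology across dates while still reacting to rapid transitions.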
The increasing demand for larger and higher fidelity simulations has made Adaptive Mesh Refinement (AMR) and unstructured mesh techniques essential to focus compute effort and memory cost on just the areas of interest in the simulation domain. The distribution of these meshes over the compute nodes is often determined by balancing compute, memory, and network costs, leading to distributions with jagged nonconvex boundaries that fit together much like puzzle pieces. It is expensive, and sometimes impossible, to re-partition the data, posing a challenge for in situ and post hoc visualization as the data cannot be rendered using standard sort-last compositing techniques that require a convex and disjoint data partitioning. We present a new distributed volume rendering and compositing algorithm, Approximate Puzzlepiece Compositing, that enables fast and high-accuracy in-place rendering of AMR and unstructured meshes. Our approach builds on Moment-Based Ordered-Independent Transparency to achieve a scalable, order-independent compositing algorithm that requires little communication and does not impose requirements on the data partitioning. We evaluate the image quality and scalability of our approach on synthetic data and two large-scale unstructured meshes on HPC systems by comparing to state-of-the-art sort-last compositing techniques, highlighting our approach's minimal overhead at higher core counts. We demonstrate that Approximate Puzzlepiece Compositing provides a scalable, high-performance, and high-quality approach to in-place rendering of AMR and unstructured meshes.
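For background, classic sort-last compositing depends on the Porter-Duff "over" operator applied in strict front-to-back depth order, which is exactly what jagged, interlocking partitions break. A minimal numpy sketch (illustrative only; the paper's moment-based approach is designed to avoid this ordering requirement) shows why order matters:

```python
import numpy as np

def over(front, back):
    """Porter-Duff 'over' operator on premultiplied RGBA arrays (..., 4)."""
    a_front = front[..., 3:4]
    return front + (1.0 - a_front) * back

# Two premultiplied-alpha fragments for a single pixel:
red  = np.array([0.5, 0.0, 0.0, 0.5])   # 50% opaque red, premultiplied
blue = np.array([0.0, 0.0, 0.8, 0.8])   # 80% opaque blue, premultiplied

print(over(red, blue))   # red in front of blue: [0.5 0.  0.4 0.9]
print(over(blue, red))   # blue in front of red: [0.1 0.  0.8 0.9]
```

The two orderings give different colors, so conventional sort-last compositing must establish a global visibility order across ranks; an order-independent formulation removes that constraint.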
Landscape analyses are typically done using spatially explicit color aerial imagery. However, working with non-spatial black and white historical aerial photographs presents several challenges that require a combination of techniques and approaches. We analyzed 113 aerial images covering approx. 700 km² (270 mi²), including all of Baltimore City and a portion of Baltimore County surrounding the City. The images were taken between August 23rd, 1952 and February 14th, 1953. High-resolution scans were georeferenced and georectified against modern satellite imagery of the area and then combined to create a single raster mosaic. This process converted the images from a disparate set of photographs into a spatially explicit GIS data set that can be used to observe changes in land patches over time, and ultimately be integrated with other long-term social, economic, and ecological data.
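Georeferencing a scanned photograph against modern imagery amounts to estimating a pixel-to-map transform from matched control points. The sketch below fits a least-squares affine model in numpy using synthetic control points (hypothetical coordinates); production workflows, e.g. GDAL-based GIS tools, typically also offer polynomial or thin-plate-spline models to handle scan distortion:

```python
import numpy as np

def fit_affine(pixel_xy, map_xy):
    """Least-squares affine transform from pixel coords to map coords.

    Solves [x, y, 1] @ A = [X, Y] for the 3x2 matrix A, given matched
    ground control points (at least three non-collinear points).
    """
    ones = np.ones((len(pixel_xy), 1))
    P = np.hstack([pixel_xy, ones])               # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(P, map_xy, rcond=None)
    return A                                      # (3, 2)

# Synthetic GCPs generated from a known transform (scale, shear, offset);
# the coordinate values are made up for illustration.
pix = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)
true_A = np.array([[0.5, 0.01],
                   [0.02, -0.5],
                   [350_000.0, 4_350_000.0]])
world = np.hstack([pix, np.ones((4, 1))]) @ true_A
A = fit_affine(pix, world)
print(np.allclose(A, true_A))  # True
```

With clean control points the affine model is recovered exactly; with real scans, residuals at the control points indicate how much distortion a higher-order model would need to absorb.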