Title: SRNet: A spatial-relationship aware point-set classification method for multiplexed pathology images.
Point-set classification for multiplexed pathology images aims to distinguish between the spatial configurations of cells within multiplexed immuno-fluorescence (mIF) images of different diseases. This problem is important for aiding pathologists in diagnosing diseases (e.g., chronic pancreatitis and pancreatic ductal adenocarcinoma). This problem is challenging because crucial spatial relationships are implicit in point sets, and the non-uniform distribution of points makes those relationships complex. Manual morphologic or cell-count based methods, the conventional clinical approach for studying spatial patterns within mIF images, are limited by inter-observer variability. Current deep neural network methods for point sets (e.g., PointNet) are limited in learning representations of the implicit spatial relationships between categorical points. To overcome this limitation, we propose a new deep neural network (DNN) architecture, namely spatial-relationship aware neural networks (SRNet), with a novel design of representation learning layers. Experimental results with a University of Michigan mIF dataset show that the proposed method significantly outperforms competing DNN methods by 80%, reaching 95% accuracy.
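The abstract does not spell out SRNet's representation learning layers, so the following is only a minimal sketch of the general idea: a PointNet-style point-set encoder augmented with explicit neighbor-offset features between categorical points. The class name, dimensions, and the k-nearest-neighbor offset feature are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a PointNet-style classifier whose per-point features include
# relative offsets to nearby cells, so spatial relationships are explicit rather than implicit.
import torch
import torch.nn as nn

class SpatialRelationEncoder(nn.Module):
    def __init__(self, num_cell_types, k=8, hidden=64, num_classes=2):
        super().__init__()
        self.k = k
        in_dim = 2 + num_cell_types + 2 * k        # xy + one-hot cell type + k neighbor offsets
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, xy, types_onehot):
        # xy: (N, 2) cell coordinates; types_onehot: (N, T) categorical cell types
        d = torch.cdist(xy, xy)                                    # pairwise distances (N, N)
        knn = d.topk(self.k + 1, largest=False).indices[:, 1:]     # k nearest neighbors, drop self
        offsets = (xy[knn] - xy[:, None, :]).flatten(1)            # (N, 2k) relative offsets
        feats = torch.cat([xy, types_onehot, offsets], dim=1)
        h = self.point_mlp(feats)                                  # per-point embedding
        g = h.max(dim=0).values                                    # order-invariant pooling over the set
        return self.classifier(g)                                  # logits for the whole point set
```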
Award ID(s):
1737633
PAR ID:
10350530
Author(s) / Creator(s):
Date Published:
Journal Name:
In Proceedings of DeepSpatial’21: 2nd ACM SIGKDD Workshop on Deep Learning for Spatiotemporal Data, Applications, and Systems
Volume:
10
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Modeling the brain as a three-dimensional spatial object, similar to a geographical landscape, has paved the way for the successful application of Kriging methods to seizure detection with good performance, but at cubic computational time complexity. Deep neural networks (DNNs) have also been widely used for seizure detection due to their effectiveness in classification tasks, although at the cost of protracted training time. While Kriging exploits the spatial correlation between data locations, a DNN relies on its capacity to learn intrinsic representations within the dataset from its most basic units. This paper presents a Distributed Kriging-Bootstrapped Deep Neural Network model as a twofold solution for fast and accurate seizure detection using electroencephalogram (EEG) signals collected from healthy subjects and patients with epilepsy. The proposed model parallelizes the Kriging computation across different cores of a machine and then produces a strongly correlated, unified quasi-output that serves as input to the deep neural network. Experimental results validate the proposed model as superior to conventional Kriging methods and DNNs: it trains in 91% less time than the basic DNN and about three times as fast as the ordinary Kriging-Bootstrapped DNN model, while maintaining good sensitivity, specificity, and testing accuracy compared to other models and existing works.
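A minimal sketch of the two-stage pipeline described above: a kriging-style interpolation step parallelized across worker processes, whose unified output feeds a neural-network classifier. The gaussian_smooth helper is a hypothetical stand-in for true kriging, and all names, sizes, and data are illustrative, not from the paper.

```python
# Hypothetical sketch: parallelized spatial smoothing of EEG channels, then an MLP classifier.
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from sklearn.neural_network import MLPClassifier

def gaussian_smooth(args):
    """Stand-in for per-chunk kriging: spatially weighted averaging over electrode positions."""
    chunk, coords, bandwidth = args
    w = np.exp(-np.square(np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)) / bandwidth)
    w /= w.sum(axis=1, keepdims=True)
    return chunk @ w.T                                   # (samples, channels) -> smoothed channels

def kriging_bootstrap(X, coords, n_workers=4, bandwidth=1.0):
    chunks = np.array_split(X, n_workers)                # split samples across cores
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        out = pool.map(gaussian_smooth, [(c, coords, bandwidth) for c in chunks])
    return np.vstack(list(out))                          # strongly correlated, unified quasi-output

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 16))                      # toy EEG features: samples x channels
    y = rng.integers(0, 2, size=2000)                    # toy seizure / non-seizure labels
    coords = rng.uniform(size=(16, 3))                   # toy 3D electrode positions
    Z = kriging_bootstrap(X, coords)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200).fit(Z, y)
    print("training accuracy:", clf.score(Z, y))
```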
  2. Given earth imagery with spectral features on a terrain surface, this paper studies surface segmentation based on both explanatory features and surface topology. The problem is important in many spatial and spatiotemporal applications such as flood extent mapping in hydrology. The problem is uniquely challenging for several reasons: first, the size of earth imagery on a terrain surface is often much larger than the input of popular deep convolutional neural networks; second, there exists topological structure dependency between pixel classes on the surface, and such dependency can follow an unknown and non-linear distribution; third, there are often limited training labels. Existing methods for earth imagery segmentation often divide the imagery into patches and consider the elevation as an additional feature channel. These methods do not fully incorporate the spatial topological structural constraint within and across surface patches and thus often show poor results, especially when training labels are limited. Existing methods on semi-supervised and unsupervised learning for earth imagery often focus on learning representation without explicitly incorporating surface topology. In contrast, we propose a novel framework that explicitly models the topological skeleton of a terrain surface with a contour tree from computational topology, which is guided by the physical constraint (e.g., water flow direction on terrains). Our framework consists of two neural networks: a convolutional neural network (CNN) to learn spatial contextual features on a 2D image grid, and a graph neural network (GNN) to learn the statistical distribution of physics-guided spatial topological dependency on the contour tree. The two models are co-trained via variational EM. Evaluations on the real-world flood mapping datasets show that the proposed models outperform baseline methods in classification accuracy, especially when training labels are limited. 
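A minimal sketch of the co-training idea described above, assuming the contour tree (node set, edges, and pixel-to-node map) has already been extracted from the terrain. The tiny CNN, one-round GNN, and simple alternation stand in for the paper's networks and variational EM; every name and dimension here is illustrative.

```python
# Hypothetical sketch: alternate between a pixel CNN and a contour-tree GNN, sharing one loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

cnn = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))

class TreeGNN(nn.Module):
    """One round of message passing over contour-tree edges (parent <- child)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.lin, self.out = nn.Linear(2, hidden), nn.Linear(hidden, 2)
    def forward(self, node_feat, edges):                  # edges: (E, 2) long tensor of node ids
        h = torch.relu(self.lin(node_feat))
        agg = torch.zeros_like(h).index_add_(0, edges[:, 0], h[edges[:, 1]])
        return self.out(h + agg)

gnn = TreeGNN()
opt = torch.optim.Adam(list(cnn.parameters()) + list(gnn.parameters()), lr=1e-3)

def co_train_step(patch, edges, pixel_to_node, labeled_mask, labels, num_nodes):
    # E-step-like pass: CNN predicts per-pixel class logits, pooled onto contour-tree nodes.
    pix = cnn(patch).permute(0, 2, 3, 1).reshape(-1, 2)
    node = torch.zeros(num_nodes, 2).index_add_(0, pixel_to_node, pix)
    # M-step-like pass: GNN enforces tree-structured (flow-direction) dependency between nodes.
    refined = gnn(node, edges)[pixel_to_node]
    loss = F.cross_entropy(pix[labeled_mask], labels) + \
           F.cross_entropy(refined[labeled_mask], labels)   # only a few labeled pixels needed
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```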
  3. Deep neural networks (DNNs) have gained considerable attention in various real-world applications due to their strong performance in representation learning. However, a DNN needs to be trained for many epochs to reach higher inference accuracy, which requires storing sequential versions of the DNN and releasing the updated versions to users. As a result, large amounts of storage and network resources are required, which significantly hampers DNN deployment on resource-constrained platforms (e.g., IoT devices, mobile phones). In this paper, we present a novel delta compression framework called Delta-DNN, which can efficiently compress the floating-point numbers in DNNs by exploiting the similarity of floats across versions of a DNN during training. Specifically, (1) we observe high similarity of floating-point numbers between neighboring versions of a neural network in training; (2) inspired by delta compression techniques, we record only the delta (i.e., the differences) between two neighboring versions instead of storing the full new version of the DNN; (3) we use error-bounded lossy compression to compress the delta data for a high compression ratio, where the error bound is strictly assessed against an acceptable loss of the DNN's inference accuracy; (4) we evaluate Delta-DNN's performance in two scenarios: reducing the network transmission cost of releasing DNNs and saving the storage space occupied by multiple versions of DNNs. Experimental results on six popular DNNs show that Delta-DNN achieves a compression ratio 2x~10x higher than state-of-the-art methods, without sacrificing inference accuracy or changing the neural network structure.
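A minimal sketch of the core delta idea: store only the quantized difference between two neighboring checkpoints, with a user-chosen absolute error bound. Uniform quantization stands in for the dedicated error-bounded lossy compressor that Delta-DNN actually uses, and the bound here is not checked against inference accuracy.

```python
# Hypothetical sketch: error-bounded lossy delta encoding between neighboring weight snapshots.
import numpy as np

def encode_delta(prev_weights, new_weights, error_bound=1e-3):
    """Quantize (new - prev) so every weight can be reconstructed within error_bound."""
    delta = new_weights - prev_weights
    return np.round(delta / (2 * error_bound)).astype(np.int32)   # small integers compress well

def decode_delta(prev_weights, q, error_bound=1e-3):
    return prev_weights + q.astype(np.float32) * (2 * error_bound)

rng = np.random.default_rng(0)
w_old = rng.normal(size=100_000).astype(np.float32)
w_new = w_old + rng.normal(scale=1e-3, size=100_000).astype(np.float32)  # neighboring epoch

q = encode_delta(w_old, w_new)
w_rec = decode_delta(w_old, q)
print("max reconstruction error:", np.abs(w_rec - w_new).max())   # stays <= error_bound
print("nonzero delta entries:", np.count_nonzero(q), "of", q.size)
```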
  4. Non-Rigid Structure from Motion (NRSfM) refers to the problem of reconstructing cameras and the 3D point cloud of a non-rigid object from an ensemble of images with 2D correspondences. Current NRSfM algorithms are limited from two perspectives: (i) the number of images, and (ii) the type of shape variability they can handle. These difficulties stem from the inherent conflict between the conditioning of the system and the degrees of freedom needing to be modeled, which has hampered its practical utility for many applications within vision. In this paper we propose a novel hierarchical sparse coding model for NRSfM which can overcome (i) and (ii) to such an extent that NRSfM can be applied to problems in vision previously thought too ill-posed. Our approach is realized in practice as the training of an unsupervised deep neural network (DNN) auto-encoder with a unique architecture that is able to disentangle pose from 3D structure. Using modern deep learning computational platforms allows us to solve NRSfM problems at an unprecedented scale and shape complexity. Our approach has no 3D supervision, relying solely on 2D point correspondences. Further, our approach is also able to handle missing/occluded 2D points without the need for matrix completion. Extensive experiments demonstrate the impressive performance of our approach, where we exhibit superior precision and robustness against all available state-of-the-art works, in some instances by an order of magnitude. We further propose a new quality measure (based on the network weights) which circumvents the need for 3D ground truth to ascertain the confidence we have in the reconstructability. We believe our work to be a significant advance over the state of the art in NRSfM.
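A minimal sketch, under stated assumptions, of an unsupervised auto-encoder in the spirit described above: 2D keypoints go in, the encoder splits them into an orthographic camera and a sparse shape code, the decoder rebuilds a 3D shape and reprojects it, and the only training signal is 2D reprojection error. A single sparse dictionary stands in for the paper's hierarchical sparse coding; all names and dimensions are illustrative.

```python
# Hypothetical sketch: pose/structure disentangling auto-encoder trained only on 2D reprojection.
import torch
import torch.nn as nn

class NRSfMAutoencoder(nn.Module):
    def __init__(self, num_points, code_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * num_points, 256), nn.ReLU(),
            nn.Linear(256, code_dim + 6),            # sparse shape code + two 3-vectors for rotation
        )
        self.basis = nn.Parameter(torch.randn(code_dim, num_points, 3) * 0.01)  # shape dictionary

    def forward(self, pts2d):                         # pts2d: (B, P, 2) observed correspondences
        z = self.encoder(pts2d.flatten(1))
        code, r = z[:, :-6], z[:, -6:]
        # Gram-Schmidt: first two rows of an orthographic camera matrix (the "pose").
        r1 = nn.functional.normalize(r[:, :3], dim=1)
        r2 = nn.functional.normalize(r[:, 3:] - (r[:, 3:] * r1).sum(1, keepdim=True) * r1, dim=1)
        cam = torch.stack([r1, r2], dim=1)            # (B, 2, 3)
        shape3d = torch.einsum("bk,kpd->bpd", code, self.basis)   # (B, P, 3) reconstructed structure
        proj = torch.einsum("bij,bpj->bpi", cam, shape3d)         # (B, P, 2) reprojection
        return proj, code

def loss_fn(proj, pts2d, code, sparsity=1e-3):
    # No 3D supervision: 2D reprojection error plus an L1 penalty encouraging sparse codes.
    return ((proj - pts2d) ** 2).mean() + sparsity * code.abs().mean()
```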
  5. Text classification is a fundamental problem, and recently, deep neural networks (DNNs) have shown promising results in many natural language tasks. However, their human-level performance relies on high-quality annotations, which are time-consuming and expensive to collect. As we move towards large, inexpensive datasets, the inherent label noise degrades the generalization of DNNs. While most machine learning literature focuses on building complex networks to handle noise, in this work we evaluate model-agnostic methods for handling inherent noise in large-scale text classification that can be easily incorporated into existing machine learning workflows with minimal interruption. Specifically, we conduct a point-by-point comparative study of several noise-robust methods on three datasets encompassing three popular classification models. To our knowledge, this is the first such comprehensive study in text classification covering popular models and model-agnostic loss methods. We describe our findings and demonstrate the application of our approach, which outperformed baselines by up to 10% in classification accuracy while requiring no network modifications.
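A minimal sketch of a model-agnostic, noise-robust loss of the kind surveyed above: it replaces cross-entropy in an existing training loop without touching the classifier. Generalized cross-entropy is shown as one representative example; whether it matches the specific losses evaluated in the paper is an assumption.

```python
# Hypothetical sketch: a drop-in noise-robust loss for any text classifier that emits logits.
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    """GCE loss: (1 - p_y**q) / q. Behaves like cross-entropy as q -> 0 and like MAE at q = 1,
    which makes it less sensitive to mislabeled examples than plain cross-entropy."""
    p = F.softmax(logits, dim=1)
    p_y = p.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
    return ((1.0 - p_y.pow(q)) / q).mean()

# Drop-in usage inside an existing training loop (no network modifications needed):
#   loss = generalized_cross_entropy(model(batch_tokens), batch_labels)
#   loss.backward(); optimizer.step()
```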