In this work, we present a multi-view framework to classify spatio-temporal phenomena at multiple resolutions. This approach utilizes the complementarity of features across different resolutions and improves the corresponding models by enforcing consistency of their predictions on unlabeled data. Unlike traditional multi-view learning problems, the key challenge in our case is that there is a many-to-one correspondence between instances across different resolutions, which needs to be explicitly modeled. Experiments on the real-world application of mapping urban areas using spatial raster datasets from satellite observations show the benefits of the proposed multi-view framework.
Semi-supervised Classification using Attention-based Regularization on Coarse-resolution Data
fine resolutions but available training data is scarce. In this paper, we propose classification algorithms that leverage supervision from coarser resolutions to help train models on finer resolutions. The different resolutions are modeled as different views of the data in a multi-view framework that exploits the complementarity of features across views to improve the models on both. Unlike traditional multi-view learning problems, the key challenge in our case is that there is no one-to-one correspondence between instances across views, which requires explicitly modeling the correspondence of instances across resolutions. We propose to use the features of instances at different resolutions to learn this correspondence with an attention mechanism. Experiments on the real-world application of mapping urban areas using satellite observations, and on sentiment classification of text data, show the effectiveness of the proposed methods.
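The attention-based correspondence described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`attention_aggregate`, `consistency_loss`), the dot-product scoring, and the assumption that each coarse instance attends over the set of fine-resolution instances it covers are all ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def attention_aggregate(fine_feats, coarse_feat):
    """Score each fine-resolution instance against one coarse instance and
    pool the fine features with the resulting attention weights, giving a
    soft many-to-one correspondence across resolutions."""
    d = coarse_feat.shape[0]
    scores = fine_feats @ coarse_feat / np.sqrt(d)   # (n_fine,)
    weights = softmax(scores)
    return weights @ fine_feats, weights             # pooled feature, weights

def consistency_loss(p_coarse, p_fine_pooled):
    """Squared difference between the two views' class probabilities;
    a regularizer of this shape can be applied on unlabeled data."""
    return float(np.sum((p_coarse - p_fine_pooled) ** 2))
```

In a full model, the pooled fine-resolution prediction would be pushed toward the coarse-resolution prediction, so supervision at the coarse view constrains the fine-view classifier.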
- Award ID(s):
- 1838159
- PAR ID:
- 10198728
- Date Published:
- Journal Name:
- Proceedings of the 2020 SIAM International Conference on Data Mining
- Page Range / eLocation ID:
- 253 to 261
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Data from many real-world applications can be naturally represented by multi-view networks where the different views encode different types of relationships (e.g., friendship, shared interests in music, etc.) between real-world individuals or entities. There is an urgent need for methods to obtain low-dimensional, information preserving and typically nonlinear embeddings of such multi-view networks. However, most of the work on multi-view learning focuses on data that lack a network structure, and most of the work on network embeddings has focused primarily on single-view networks. Against this background, we consider the multi-view network representation learning problem, i.e., the problem of constructing low-dimensional information preserving embeddings of multi-view networks. Specifically, we investigate a novel Generative Adversarial Network (GAN) framework for Multi-View Network Embedding, namely MEGAN, aimed at preserving the information from the individual network views, while accounting for connectivity across (and hence complementarity of and correlations between) different views. The results of our experiments on two real-world multi-view data sets show that the embeddings obtained using MEGAN outperform the state-of-the-art methods on node classification, link prediction and visualization tasks.
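The core idea of a shared embedding serving several views can be sketched with a toy setup. This is not MEGAN's GAN architecture: the bilinear per-view scorer, the random adjacency matrices, and all names below are stand-ins chosen for brevity, illustrating only that one embedding table is scored against every view's edges.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy multi-view network: one adjacency matrix per view over the same nodes.
n_nodes, n_views, dim = 6, 2, 4
adjs = [rng.integers(0, 2, size=(n_nodes, n_nodes)) for _ in range(n_views)]

# One shared embedding per node, plus a per-view bilinear scorer that lets
# the same embedding explain different relationship types.
emb = rng.normal(size=(n_nodes, dim))
view_mats = [rng.normal(size=(dim, dim)) for _ in range(n_views)]

def edge_prob(u, v, view):
    """Predicted probability that (u, v) is a true edge in `view`."""
    return sigmoid(emb[u] @ view_mats[view] @ emb[v])

def view_loss(view):
    """Cross-entropy over all node pairs in one view: real edges pushed
    toward 1, non-edges toward 0. Summing this over views trains the
    shared embeddings against every view at once."""
    loss = 0.0
    for u in range(n_nodes):
        for v in range(n_nodes):
            p = edge_prob(u, v, view)
            y = adjs[view][u, v]
            loss += -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss / n_nodes ** 2
```

In MEGAN proper, this reconstruction signal is replaced by an adversarial game between a generator and a discriminator, but the shared-embedding/per-view-scorer split is the part this sketch is meant to convey.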
-
Multi-view data, i.e., matched sets of measurements on the same subjects, have become increasingly common with advances in multi-omics technology. Often, it is of interest to find associations between the views that are related to the intrinsic class memberships. Existing association methods cannot directly incorporate class information, while existing classification methods do not take into account between-view associations. In this work, we propose a framework for Joint Association and Classification Analysis of multi-view data (JACA). Our goal is not merely to improve misclassification rates, but to provide a latent representation of high-dimensional data that is both relevant for subtype discrimination and coherent across the views. We motivate the methodology by establishing a connection between canonical correlation analysis and discriminant analysis. We also establish the estimation consistency of JACA in high-dimensional settings. A distinct advantage of JACA is that it can be applied to multi-view data with a block-missing structure, that is, to cases where a subset of views or class labels is missing for some subjects. Applying JACA to quantify the associations between RNAseq and miRNA views with respect to consensus molecular subtypes in colorectal cancer data from The Cancer Genome Atlas project leads to improved misclassification rates and stronger associations compared to existing methods.
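The flavor of a joint association-and-classification criterion can be shown with a toy least-squares stand-in. This is not the actual JACA objective (which is built on optimal scoring and canonical correlation with sparsity penalties); the function name, the squared-error form, and the weighting `lam` are illustrative assumptions only.

```python
import numpy as np

def joint_objective(W1, W2, X1, X2, Y, lam=1.0):
    """Toy criterion combining two goals: each view's linear projection
    should recover the class-indicator matrix Y (the discriminant part),
    and the two views' projections should agree with each other (the
    canonical-correlation / association part)."""
    fit1 = np.sum((X1 @ W1 - Y) ** 2)          # view-1 discrimination
    fit2 = np.sum((X2 @ W2 - Y) ** 2)          # view-2 discrimination
    agree = np.sum((X1 @ W1 - X2 @ W2) ** 2)   # between-view association
    return float(fit1 + fit2 + lam * agree)
```

Minimizing a criterion of this shape yields projections that are simultaneously class-discriminative and coherent across views; the agreement term is also what lets subjects with a missing view or label still contribute to the parts of the objective they can support.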
-
The success of supervised learning requires large-scale ground truth labels, which are expensive, time-consuming, or require special skills to annotate. To address this issue, many self-supervised and unsupervised methods have been developed. Unlike most existing self-supervised methods that learn only 2D image features or only 3D point cloud features, this paper presents a novel and effective self-supervised learning approach that jointly learns both 2D image features and 3D point cloud features by exploiting cross-modality and cross-view correspondences, without using any human-annotated labels. Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolutional neural network. The two types of features are fed into a two-layer fully connected neural network to estimate the cross-modality correspondence. The three networks are jointly trained (i.e., cross-modality) by verifying whether two sampled data of different modalities belong to the same object; meanwhile, the 2D convolutional neural network is additionally optimized by minimizing intra-object distance while maximizing inter-object distance of rendered images in different views (i.e., cross-view). The effectiveness of the learned 2D and 3D features is evaluated by transferring them to five different tasks: multi-view 2D shape recognition, 3D shape recognition, multi-view 2D shape retrieval, 3D shape retrieval, and 3D part segmentation. Extensive evaluations on all five tasks across different datasets demonstrate the strong generalization and effectiveness of the learned 2D and 3D features.
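The cross-modality part of this pipeline can be sketched in a few lines. The two encoders are stubbed out (the real work describes a 2D CNN and a graph convolutional network), the two-layer head is collapsed to a single bilinear score, and every name below is an illustrative assumption; only the structure (two encoders plus a same-object classifier over cross-modal pairs) follows the abstract.

```python
import numpy as np

def image_encoder(img):
    """Stand-in for the 2D convolutional network: mean-pool pixels."""
    return img.mean(axis=0)

def cloud_encoder(pts):
    """Stand-in for the 3D graph convolutional network: mean-pool points."""
    return pts.mean(axis=0)

def correspondence_score(f2d, f3d, W):
    """Probability that a 2D feature and a 3D feature come from the same
    object; the paper's two-layer head is simplified to a bilinear form."""
    return 1.0 / (1.0 + np.exp(-(f2d @ W @ f3d)))

def cross_modality_loss(pairs, labels, W):
    """Binary cross-entropy over sampled (image, point cloud) pairs, where
    the label says whether both items come from the same object. Training
    on this signal needs no human annotation."""
    loss = 0.0
    for (img, pts), y in zip(pairs, labels):
        p = correspondence_score(image_encoder(img), cloud_encoder(pts), W)
        loss += -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss / len(pairs)
```

The cross-view term of the paper (pulling renderings of the same object together, pushing different objects apart) would be a second loss applied to `image_encoder` outputs alone.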