Search for: All records

Creators/Authors contains: "Mateos, Gonzalo"


  1. Graphs serve as generic tools to encode the underlying relational structure of data. Often this graph is not given, so the task of inferring it from nodal observations becomes important. Traditional approaches formulate a convex inverse problem with a smoothness-promoting objective and rely on iterative methods to obtain a solution. In supervised settings where graph labels are available, one can unroll and truncate these iterations into a deep network that is trained end-to-end. Such a network is parameter efficient and inherits inductive bias from the optimization formulation, an appealing aspect for data-constrained settings in, e.g., medicine, finance, and the natural sciences. But such settings typically care equally about uncertainty over edge predictions, not just point estimates. Here we introduce novel iterations with independently interpretable parameters, i.e., parameters whose values, independent of other parameters' settings, proportionally influence characteristics of the estimated graph, such as edge sparsity. After unrolling these iterations, prior knowledge over such graph characteristics shapes prior distributions over these independently interpretable network parameters, yielding a Bayesian neural network (BNN) capable of graph structure learning (GSL) from smooth signal observations. Fast execution and parameter efficiency allow for high-fidelity posterior approximation via Markov chain Monte Carlo (MCMC) and thus uncertainty quantification on edge predictions. Informative priors unlock modeling tools from Bayesian statistics, like prior predictive checks. Synthetic and real data experiments corroborate this model's ability to provide well-calibrated estimates of uncertainty, in test cases that include unveiling economic sector modular structure from S&P500 data and recovering pairwise digit similarities from MNIST images. Overall, this framework enables GSL in modest-scale applications where uncertainty on the data structure is paramount. [A minimal sketch of one such unrolled iteration appears after this list.]
    Free, publicly-accessible full text available September 1, 2025
  2. We investigate online network topology identification from smooth nodal observations acquired in a streaming fashion. In contrast to non-adaptive batch solutions, our distinctive goal is to track the (possibly) dynamic adjacency matrix with affordable memory and computational costs by processing signal snapshots online. To this end, we leverage and truncate dual-based proximal gradient (DPG) iterations to solve a composite, smoothness-regularized, time-varying inverse problem. Numerical tests with synthetic and real electrocorticography data showcase the effectiveness of the novel lightweight iterations at tracking slowly varying network connectivity. We also show that the online DPG algorithm converges faster than a primal-based baseline of comparable complexity. Aligned with reproducible research practices, we share the code developed to produce all figures included in this paper. [A minimal sketch of per-snapshot tracking iterations appears after this list.]
  3. Machine learning frameworks such as graph neural networks typically rely on a given, fixed graph to exploit relational inductive biases and thus effectively learn from network data. However, when said graphs are (partially) unobserved, noisy, or dynamic, the problem of inferring graph structure from data becomes relevant. In this paper, we postulate a graph convolutional relationship between the observed and latent graphs, and formulate the graph structure learning task as a network inverse (deconvolution) problem. In lieu of eigendecomposition-based spectral methods or iterative optimization solutions, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN). GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive as well as node permutation equivariant. We corroborate GDNs' superior graph learning performance and their generalization to larger graphs using synthetic data in supervised settings. Moreover, we demonstrate the robustness and representation power of GDNs on real-world neuroimaging and social network datasets. [A minimal sketch of one unrolled deconvolution step appears after this list.]
  4. Advances in graph signal processing for network neuroscience offer a unique pathway to integrate brain structure and function, with the goal of revealing some of the brain's organizing principles at the system level. In this direction, we develop a supervised graph representation learning framework to model the relationship between brain structural connectivity (SC) and functional connectivity (FC) via a graph encoder-decoder system. Specifically, we propose a Siamese network architecture equipped with graph convolutional encoders to learn graph (i.e., subject)-level embeddings that preserve application-dependent similarity measures between brain networks. This way, we effectively increase the number of training samples and bring in the flexibility to incorporate additional prior information via the prescribed target graph-level distance. While information on the brain structure-function coupling is implicitly distilled via reconstruction of brain FC from SC, our model also manages to learn representations that preserve the similarity between input graphs. The superior discriminative power of the learnt representations is demonstrated in downstream tasks including subject classification and visualization. All in all, this work advocates the prospect of leveraging learnt graph-level, similarity-preserving embeddings for brain network analysis, by bringing to bear standard tools of metric data analysis. [A minimal sketch of the Siamese encoder idea appears after this list.]
  5. We propose a deep learning solution to the inverse problem of localizing sources of network diffusion. Invoking graph signal processing (GSP) fundamentals, the problem boils down to blind estimation of a diffusion filter and its sparse input signal encoding the source locations. While the observations are bilinear functions of the unknowns, a mild requirement on invertibility of the graph filter enables a convex reformulation that we solve via the alternating-direction method of multipliers (ADMM). We unroll and truncate the novel ADMM iterations to arrive at a parameterized neural network architecture for Source Localization on Graphs (SLoG-Net), which we train in an end-to-end fashion using labeled data. This way we leverage the inductive biases of a GSP model-based solution in a data-driven, trainable parametric architecture, which is interpretable, parameter efficient, and offers controllable complexity during inference. Experiments with simulated data corroborate that SLoG-Net performs on par with the iterative ADMM baseline, while attaining significant (post-training) speedups. [A minimal ADMM sketch for the convex reformulation appears after this list.]
  6. Oh, Alice; Agarwal, Alekh; Belgrave, Danielle; Cho, Kyunghyun (Ed.)
    Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning. Principal component analysis (PCA) involves projecting data onto the eigenspace of the covariance matrix and draws similarities with the graph convolutional filters in GNNs. Motivated by this observation, we study a GNN architecture, called the coVariance neural network (VNN), that operates on sample covariance matrices as graphs. We theoretically establish the stability of VNNs to perturbations in the covariance matrix, thus implying an advantage over standard PCA-based data analysis approaches, which are prone to instability due to principal components associated with close eigenvalues. Our experiments on real-world datasets validate our theoretical results and show that VNN performance is indeed more stable than that of PCA-based statistical approaches. Moreover, our experiments on multi-resolution datasets demonstrate that VNNs are amenable to transferability of performance over covariance matrices of different dimensions, a feature that is infeasible for PCA-based approaches. [A minimal sketch of a coVariance filter layer appears after this list.]
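The sketches below are editorial illustrations of the methods summarized above, written in plain NumPy. All function names, parameter values, and model simplifications are assumptions for exposition, not the authors' implementations. For record 1, a minimal sketch of an unrolled proximal-gradient iteration for graph learning from smooth signals: a hypothetical threshold parameter theta proportionally controls edge sparsity, the kind of independently interpretable parameter on which the BNN would place a prior.

```python
import numpy as np

def pairwise_sq_dists(X):
    # z_ij = ||x_i - x_j||^2 across rows of X (nodes x samples)
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

def unrolled_layer(w, z, step, theta):
    # One hypothetical proximal-gradient step on the edge-weight vector w:
    # descend on the smoothness term 2 * w^T z, then apply a non-negative
    # soft threshold. Raising theta monotonically sparsifies the estimate,
    # independently of the other parameters, so a prior on theta encodes
    # prior beliefs about edge sparsity.
    return np.maximum(w - step * (2.0 * z + theta), 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 50))                        # 8 nodes, 50 signal samples
z = pairwise_sq_dists(X)[np.triu_indices(8, k=1)]   # vectorized upper triangle
z /= z.max()
w = np.ones_like(z)
for _ in range(10):                                  # truncated unrolling depth
    w = unrolled_layer(w, z, step=0.05, theta=0.5)
print("estimated nonzero edges:", np.count_nonzero(w))
```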
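For record 2, a minimal sketch of online topology tracking: one lightweight projected-gradient iteration per incoming snapshot on a standard smoothness plus log-degree plus Frobenius objective, warm-started at the previous estimate. This is a primal stand-in for the paper's dual-based proximal gradient (DPG) iterations; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
iu = np.triu_indices(n, k=1)
m = iu[0].size
# Linear map from edge weights to node degrees: d = S @ w
S = np.zeros((n, m))
S[iu[0], np.arange(m)] = 1.0
S[iu[1], np.arange(m)] = 1.0

gamma, step, alpha, beta = 0.9, 0.01, 1.0, 0.5
w = np.full(m, 0.5)                  # warm-started edge-weight estimate
z_bar = np.ones(m)                   # discounted pairwise-distance statistics

for t in range(300):                 # streaming signal snapshots
    x_t = rng.normal(size=n)         # stand-in for one smooth graph signal
    z_t = ((x_t[:, None] - x_t[None, :]) ** 2)[iu]
    z_bar = gamma * z_bar + (1.0 - gamma) * z_t
    # One projected-gradient iteration per snapshot: smoothness pushes
    # weights down, the log-degree barrier keeps nodes connected, and the
    # warm start lets the iterate track slowly varying topology at fixed
    # memory and cost per snapshot.
    d = S @ w + 1e-8
    grad = 2.0 * z_bar - alpha * (S.T @ (1.0 / d)) + 2.0 * beta * w
    w = np.maximum(w - step * grad, 0.0)

print("tracked nonzero edges:", np.count_nonzero(w), "of", m)
```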
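For record 3, a minimal sketch of one unrolled GDN-style step, assuming a first-order convolution model T ≈ h * A in place of the paper's general graph convolutional relationship. The gradient step and sparsifying soft threshold are the generic proximal-gradient ingredients that get unrolled and truncated into network layers.

```python
import numpy as np

def gdn_layer(A, T, h, step, lam):
    # One unrolled proximal-gradient step for the deconvolution model
    # T ~ h * A: gradient step on ||T - h*A||_F^2, then a non-negative
    # soft threshold, keeping the estimate hollow (zero diagonal).
    A = A + step * 2.0 * h * (T - h * A)
    A = np.maximum(A - step * lam, 0.0)
    np.fill_diagonal(A, 0.0)
    return A

rng = np.random.default_rng(2)
n = 12
A_true = (rng.random((n, n)) < 0.3).astype(float)
A_true = np.triu(A_true, 1); A_true += A_true.T      # ground-truth latent graph
T = 0.8 * A_true + 0.1 * (A_true @ A_true)           # observed "convolved" graph

A = np.zeros((n, n))
for _ in range(30):                                   # truncated unrolling depth
    A = gdn_layer(A, T, h=0.8, step=0.1, lam=0.05)
print("support match:", np.array_equal(A > 0.5, A_true > 0))
```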
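For record 4, a minimal sketch of the Siamese idea only: a shared graph-convolutional encoder maps two structural connectomes to graph-level embeddings, and the loss matches their embedding distance to a prescribed target distance. The FC-reconstruction decoder of the full model is omitted, and all sizes are toy values.

```python
import numpy as np

def gcn_embed(S, X, W):
    # Shared (Siamese) graph-convolutional encoder: one graph filter
    # layer followed by mean pooling into a graph-level embedding.
    return np.tanh(S @ X @ W).mean(axis=0)

rng = np.random.default_rng(3)
n, f, d = 8, 4, 16
W = rng.normal(scale=0.1, size=(f, d))        # weights shared by both branches
X = rng.normal(size=(n, f))                   # node features (shared, for simplicity)

S1 = rng.random((n, n)); S1 = (S1 + S1.T) / 2  # structural connectome, subject 1
S2 = rng.random((n, n)); S2 = (S2 + S2.T) / 2  # structural connectome, subject 2

z1, z2 = gcn_embed(S1, X, W), gcn_embed(S2, X, W)
d_target = 1.0                                 # prescribed graph-level distance
loss = (np.linalg.norm(z1 - z2) - d_target) ** 2
print("siamese distance-matching loss:", round(float(loss), 4))
```

Training would backpropagate this loss through the shared weights W, so every pair of subjects (rather than every subject) is a training sample.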
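For record 5, a small ADMM sketch for a convex reformulation of blind deconvolution in the spirit of SLoG-Net: assuming the diffusion filter is an invertible polynomial in the graph shift operator S, its inverse is again a polynomial in S, so the sparse sources can be written as x = Phi @ g and one can minimize ||Phi g||_1 under a scale-fixing linear constraint. The constraint choice (1^T g = 1), penalty rho, ridge, and iteration count are illustrative; SLoG-Net would instead unroll and truncate such iterations and learn these knobs end-to-end.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
S = rng.random((n, n)); S = (S + S.T) / 2
S /= np.linalg.norm(S, 2)                          # normalized graph shift operator
x_true = np.zeros(n)
x_true[rng.choice(n, size=2, replace=False)] = 1.0  # sparse source locations
H = np.eye(n) + 0.5 * S + 0.2 * (S @ S)             # invertible diffusion filter
y = H @ x_true                                      # observed diffused signal

# Columns of Phi span candidate inverse-filter outputs: x = Phi @ g
Phi = np.column_stack([np.linalg.matrix_power(S, k) @ y for k in range(n)])
rho = 1.0
x = np.zeros(n); u = np.zeros(n)
# KKT system for the equality-constrained least-squares g-update
KKT = np.zeros((n + 1, n + 1))
KKT[:n, :n] = rho * Phi.T @ Phi + 1e-9 * np.eye(n)  # tiny ridge for stability
KKT[:n, n] = 1.0; KKT[n, :n] = 1.0
for _ in range(200):
    rhs = np.append(rho * Phi.T @ (x - u), 1.0)
    g = np.linalg.solve(KKT, rhs)[:n]               # g-update s.t. 1^T g = 1
    v = Phi @ g + u
    x = np.sign(v) * np.maximum(np.abs(v) - 1.0 / rho, 0.0)  # soft threshold
    u += Phi @ g - x                                # dual ascent step
print("largest |x| entries:", np.sort(np.argsort(-np.abs(x))[:2]))
print("true sources:       ", np.sort(np.nonzero(x_true)[0]))
```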
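For record 6, a minimal sketch of a coVariance filter layer: a polynomial in the sample covariance matrix applied to an input signal, followed by a pointwise nonlinearity. The taps are illustrative; the smooth dependence on C, as opposed to PCA's hard projection onto eigenvectors, is the intuition behind the stability result.

```python
import numpy as np

def vnn_layer(C, X, taps):
    # coVariance filter: a polynomial in the sample covariance matrix C
    # applied to the input X, then a pointwise nonlinearity. Unlike PCA's
    # projection onto individual eigenvectors (unstable when eigenvalues
    # are close), the polynomial filter varies smoothly with C.
    Z = sum(h * np.linalg.matrix_power(C, k) @ X for k, h in enumerate(taps))
    return np.tanh(Z)

rng = np.random.default_rng(5)
data = rng.normal(size=(200, 10))        # 200 samples of a 10-variable signal
C = np.cov(data, rowvar=False)           # sample covariance plays the graph role
X = rng.normal(size=(10, 1))             # one input signal on the 10 variables
out = vnn_layer(C, X, taps=[0.5, 0.3, 0.1])
print("VNN layer output shape:", out.shape)
```

Because the layer touches C only through matrix polynomials, the same taps can be applied to covariance matrices of different dimensions, which is the transferability property noted in the abstract.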