This content will become publicly available on March 19, 2026

Title: A spike-and-slab prior for dimension selection in generalized linear network eigenmodels
Abstract: Latent space models are often used to model network data by embedding a network’s nodes into a low-dimensional latent space; however, choosing the dimension of this space remains a challenge. To this end, we begin by formalizing a class of latent space models we call generalized linear network eigenmodels that can model various edge types (binary, ordinal, nonnegative continuous) found in scientific applications. This model class subsumes the traditional eigenmodel by embedding it in a generalized linear model with an exponential dispersion family random component, and it fixes identifiability issues that have hindered interpretability. We propose a Bayesian approach to dimension selection for generalized linear network eigenmodels based on an ordered spike-and-slab prior that provides improved dimension estimation and satisfies several appealing theoretical properties. We show that the model’s posterior is consistent and concentrates on low-dimensional models near the truth. We demonstrate our approach’s consistent dimension selection on simulated networks, and we use generalized linear network eigenmodels to study the effect of covariates on the formation of networks from biology, ecology, and economics, as well as the existence of residual latent structure.
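A minimal mathematical sketch of the model class described in the abstract, assuming notation not given there: Y_ij is the edge variable for nodes i and j, g a link function, x_ij dyadic covariates with coefficients beta, u_i node latent positions, and Lambda a diagonal matrix of eigenvalue scales. The exact formulation and prior are in the paper; the ordering constraint below is one plausible reading of "ordered spike-and-slab".

\[
g\bigl(\mathbb{E}[Y_{ij}]\bigr) = x_{ij}^{\top}\beta + u_i^{\top}\Lambda u_j,
\qquad \Lambda = \operatorname{diag}(\lambda_1,\dots,\lambda_K),
\]
\[
\lambda_d \mid \gamma_d \sim \gamma_d\,\pi_{\mathrm{slab}}(\lambda_d) + (1-\gamma_d)\,\delta_0,
\qquad \gamma_1 \ge \gamma_2 \ge \cdots \ge \gamma_K, \quad \gamma_d \in \{0,1\},
\]

Under this constraint the active dimensions form a prefix, and the selected dimension is \(\sum_{d=1}^{K}\gamma_d\), so dimension selection reduces to inferring how many indicators are switched on.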
Award ID(s):
2345043
PAR ID:
10590199
Author(s) / Creator(s):
;
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Biometrika
ISSN:
0006-3444
Subject(s) / Keyword(s):
Generalized linear model; Model selection; Network latent space model; Statistical network analysis
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Set representation has become ubiquitous in deep learning for modeling the inductive bias of neural networks that are insensitive to the input order. DeepSets is the most widely used neural network architecture for set representation. It involves embedding each set element into a latent space with dimension L, followed by a sum pooling to obtain a whole-set embedding, and finally mapping the whole-set embedding to the output. In this work, we investigate the impact of the dimension L on the expressive power of DeepSets. Previous analyses either oversimplified high-dimensional features to be one-dimensional features or were limited to analytic activations, thereby diverging from practical use or resulting in L that grows exponentially with the set size N and feature dimension D. To investigate the minimal value of L that achieves sufficient expressive power, we present two set-element embedding layers: (a) linear + power activation (LP) and (b) linear + exponential activations (LE). We demonstrate that L being poly(N,D) is sufficient for set representation using both embedding layers. We also provide a lower bound of L for the LP embedding layer. Furthermore, we extend our results to permutation-equivariant set functions and the complex field. 
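A minimal runnable sketch of the DeepSets pipeline in item 1, using the LP (linear + power activation) element embedding named in the abstract; the function names, shapes, and the placeholder output map rho are illustrative assumptions, not the paper's construction.

import numpy as np

def lp_embedding(X, W, p=2):
    # LP set-element embedding: a linear map followed by an elementwise power activation.
    return (X @ W) ** p

def deepsets(X, W, p, rho):
    # X: (N, D) array holding a set of N elements with D features each.
    H = lp_embedding(X, W, p)  # (N, L): per-element embeddings in a latent space of dimension L
    s = H.sum(axis=0)          # sum pooling: invariant to the ordering of the N rows
    return rho(s)              # map the whole-set embedding to the output

# Permutation-invariance check on random data (rho is a stand-in output map).
rng = np.random.default_rng(0)
X, W = rng.normal(size=(5, 3)), rng.normal(size=(3, 8))
rho = lambda s: s.sum()
perm = rng.permutation(5)
assert np.isclose(deepsets(X, W, 2, rho), deepsets(X[perm], W, 2, rho))

The expressive-power question in the abstract is about how large L (the second dimension of W here) must be before such sum-pooled embeddings can represent arbitrary set functions.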
  2. It is currently known how to characterize functions that neural networks can learn with SGD for two extremal parametrizations: neural networks in the linear regime, and neural networks with no structural constraints. However, for the main parametrization of interest, non-linear but regular networks, no tight characterization has yet been achieved, despite significant developments. We take a step in this direction by considering depth-2 neural networks trained by SGD in the mean-field regime. We consider functions on binary inputs that depend on a latent low-dimensional subspace (i.e., a small number of coordinates). This regime is of interest since it is poorly understood how neural networks routinely tackle high-dimensional datasets and adapt to latent low-dimensional structure without suffering from the curse of dimensionality. Accordingly, we study SGD-learnability with O(d) sample complexity in a large ambient dimension d. Our main results characterize a hierarchical property, the merged-staircase property, that is both necessary and nearly sufficient for learning in this setting. We further show that non-linear training is necessary: for this class of functions, linear methods on any feature map (e.g., the NTK) are not capable of learning efficiently. The key tools are a new “dimension-free” dynamics approximation result that applies to functions defined on a latent space of low dimension, a proof of global convergence based on polynomial identity testing, and an improvement of lower bounds against linear methods for non-almost orthogonal functions.
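For concreteness, a small checker for the merged-staircase property as it is usually stated in this line of work: writing the target function as a sum of monomials, some ordering of the monomials must add at most one previously unseen coordinate at a time. This is a sketch from memory, and the paper's precise definition may differ in detail.

from itertools import permutations

def has_merged_staircase(monomial_supports):
    # monomial_supports: list of sets of coordinate indices, one per monomial of f.
    for order in permutations(monomial_supports):
        covered, ok = set(), True
        for S in order:
            if len(set(S) - covered) > 1:  # each monomial may add at most one new coordinate
                ok = False
                break
            covered |= set(S)
        if ok:
            return True
    return False

print(has_merged_staircase([{1}, {1, 2}, {1, 2, 3}]))  # True:  x1 + x1*x2 + x1*x2*x3
print(has_merged_staircase([{1, 2, 3}]))               # False: x1*x2*x3 alone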
  3. Generative models based on latent variables, such as generative adversarial networks (GANs) and variational auto-encoders (VAEs), have gained a lot of interest due to their impressive performance in many fields. However, many data such as natural images usually do not populate the ambient Euclidean space but instead reside on a lower-dimensional manifold. Thus an inappropriate choice of the latent dimension fails to uncover the structure of the data, possibly resulting in a mismatch of latent representations and poor generative quality. Toward addressing these problems, we propose a novel framework called the latent Wasserstein GAN (LWGAN) that fuses the Wasserstein auto-encoder and the Wasserstein GAN so that the intrinsic dimension of the data manifold can be adaptively learned by a modified informative latent distribution. We prove that there exist an encoder network and a generator network such that the intrinsic dimension of the learned encoding distribution is equal to the dimension of the data manifold. We theoretically establish that our estimated intrinsic dimension is a consistent estimate of the true dimension of the data manifold. Meanwhile, we provide an upper bound on the generalization error of LWGAN, implying that we force the synthetic data distribution to be similar to the real data distribution from a population perspective. Comprehensive empirical experiments verify our framework and show that LWGAN is able to identify the correct intrinsic dimension under several scenarios, and simultaneously generate high-quality synthetic data by sampling from the learned latent distribution. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
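As a purely illustrative companion to item 3 (this is not the paper's estimator), one simple way to read an intrinsic dimension off a learned encoding distribution is to count the non-degenerate directions of the latent codes; the name estimated_intrinsic_dim and the threshold rel_tol are assumptions.

import numpy as np

def estimated_intrinsic_dim(Z, rel_tol=1e-3):
    # Z: (n, L) latent codes produced by a trained encoder.
    cov = np.cov(Z, rowvar=False)                  # (L, L) sample covariance of the codes
    eig = np.linalg.eigvalsh(cov)                  # eigenvalues in ascending order
    return int(np.sum(eig > rel_tol * eig.max()))  # count non-degenerate directions

# Codes confined to a 2-dimensional subspace of R^5 should report dimension 2.
rng = np.random.default_rng(1)
Z = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 5))
print(estimated_intrinsic_dim(Z))  # 2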
  4. A very popular class of models for networks posits that each node is represented by a point in a continuous latent space, and that the probability of an edge between nodes is a decreasing function of the distance between them in this latent space. We study the embedding problem for these models: recovering the latent positions from the observed graph. Assuming certain natural symmetry and smoothness properties, we establish the uniform convergence of the log-likelihood of latent positions as the number of nodes grows. A consequence is that the maximum likelihood embedding converges on the true positions in a certain information-theoretic sense. Extensions of these results, to recovering distributions in the latent space, and so distributions over arbitrarily large graphs, will be treated in the sequel.
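A short sketch of the log-likelihood such an embedding maximizes, with a logistic link chosen purely for concreteness (the model class in item 4 only requires that the edge probability decrease with latent distance); edge_loglik and alpha are illustrative names.

import numpy as np

def edge_loglik(Z, A, alpha=1.0):
    # Z: (n, d) latent positions; A: (n, n) symmetric binary adjacency matrix.
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)  # pairwise latent distances
    P = 1.0 / (1.0 + np.exp(D - alpha))  # edge probability, decreasing in distance
    i, j = np.triu_indices_from(A, k=1)  # count each dyad once
    p, a = P[i, j], A[i, j]
    eps = 1e-12                          # guard against log(0)
    return np.sum(a * np.log(p + eps) + (1.0 - a) * np.log(1.0 - p + eps))

Maximum likelihood embedding then means maximizing this quantity over Z, and the uniform-convergence result is what lets that maximizer track the true positions as the number of nodes grows.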
  5. Multiple networks emerge in a wealth of high-impact applications. Network alignment, which aims to find the node correspondence across different networks, plays a fundamental role in many data mining tasks. Most of the existing methods can be divided into two categories: (1) consistency-optimization-based methods, which often explicitly assume the alignment to be consistent in terms of neighborhood topology and attributes across networks, and (2) network-embedding-based methods, which learn low-dimensional node embedding vectors to infer the alignment. In this paper, by analyzing certain methods from these two categories, we show that (1) the consistency-optimization-based methods are essentially specific random walk propagations from anchor links that might be restrictive, and (2) the embedding-based methods no longer explicitly assume alignment consistency but inevitably suffer from the space disparity issue. To overcome these two limitations, we bridge these methods and propose a novel family of network alignment algorithms, BRIGHT, to handle both non-attributed and attributed networks. Specifically, it constructs a space by random walk with restart (RWR) whose bases are one-hot encoding vectors of anchor nodes, followed by a shared linear layer. Our experiments on real-world networks show that the proposed family of algorithms, BRIGHT, outperforms the state of the art on both non-attributed and attributed network alignment tasks.
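A minimal sketch of the RWR feature construction described in item 5, under stated assumptions: W is a column-stochastic transition matrix, c is the walk-continuation probability, and rwr_positions is a hypothetical name; the shared linear layer that BRIGHT applies afterward is noted only in a comment.

import numpy as np

def rwr_positions(W, anchors, c=0.85):
    # W: (n, n) column-stochastic transition matrix; anchors: indices of anchor nodes.
    n = W.shape[0]
    E = np.zeros((n, len(anchors)))
    E[anchors, np.arange(len(anchors))] = 1.0  # one one-hot restart vector per anchor
    # Stationary RWR scores: r = c * W r + (1 - c) * e  =>  r = (1 - c) (I - c W)^{-1} e.
    return (1.0 - c) * np.linalg.solve(np.eye(n) - c * W, E)

# Each node is then represented by its vector of RWR scores with respect to the anchor
# basis; a shared linear layer (e.g., R @ W_shared) would map both networks' positions
# into a common space before alignment.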