Title: Uniform convergence rates for Lipschitz learning on graphs
Abstract: Lipschitz learning is a graph-based semisupervised learning method in which one extends labels from a labeled subset of the data to the unlabeled remainder by solving the infinity Laplace equation on a weighted graph. In this work we prove uniform convergence rates for solutions of the graph infinity Laplace equation as the number of vertices grows to infinity. Their continuum limits are absolutely minimizing Lipschitz extensions (AMLEs) with respect to the geodesic metric of the domain from which the graph vertices are sampled. We work under very general assumptions on the graph weights, the set of labeled vertices and the continuum domain. Our main contribution is that we obtain quantitative convergence rates even for very sparsely connected graphs, as they typically appear in applications like semisupervised learning. In particular, our framework allows for graph bandwidths down to the connectivity radius. To prove this, we first show a quantitative convergence statement for graph distance functions to geodesic distance functions in the continuum. Using the 'comparison with distance functions' principle, we then transfer these convergence statements to infinity harmonic functions and AMLEs.
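For orientation, the discrete problem underlying Lipschitz learning can be written schematically as follows. This is a standard formulation (edge weights $$w_{xy}$$, labeled vertex set $$\Gamma$$, label function $$g$$); the paper's precise setup may differ in detail:

$$\max_{y \sim x} w_{xy}\big(u(y)-u(x)\big) \;+\; \min_{y \sim x} w_{xy}\big(u(y)-u(x)\big) = 0 \quad \text{for } x \notin \Gamma, \qquad u = g \text{ on } \Gamma,$$

where $$y \sim x$$ ranges over the graph neighbours of $$x$$. Solutions are graph infinity harmonic functions; the rates in the paper quantify how fast they converge uniformly to the continuum AMLE as the number of vertices grows.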
Award ID(s):
1944925
PAR ID:
10422618
Author(s) / Creator(s):
Date Published:
Journal Name:
IMA Journal of Numerical Analysis
ISSN:
0272-4979
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1.
    Abstract: We consider dynamics driven by interaction energies on graphs. We introduce graph analogues of the continuum nonlocal-interaction equation and interpret them as gradient flows with respect to a graph Wasserstein distance. The particular Wasserstein distance we consider arises from the graph analogue of the Benamou–Brenier formulation, where the graph continuity equation uses an upwind interpolation to define the density along the edges. While this approach has both theoretical and computational advantages, the resulting distance is only a quasi-metric. We investigate this quasi-metric both on graphs and on more general structures where the set of “vertices” is an arbitrary positive measure. We call the resulting gradient flow of the nonlocal-interaction energy the nonlocal nonlocal-interaction equation (NL$$^2$$IE). We develop the existence theory for the solutions of the NL$$^2$$IE as curves of maximal slope with respect to the upwind Wasserstein quasi-metric. Furthermore, we show that the solutions of the NL$$^2$$IE on graphs converge as the empirical measures of the set of vertices converge weakly, which establishes a valuable discrete-to-continuum convergence result.
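    For orientation, the upwind interpolation mentioned above can be sketched as follows; the notation ($$\rho$$ for the density, $$v$$ for the edge velocity field, $$w$$ for the edge weights) is illustrative, and the authors' formulation may differ in detail:

$$\partial_t \rho_t(x) + \sum_{y} w(x,y)\Big(\rho_t(x)\,v_t(x,y)_+ - \rho_t(y)\,v_t(x,y)_-\Big) = 0, \qquad a_+ = \max(a,0),\; a_- = \max(-a,0).$$

    Mass flowing out of $$x$$ along an edge is weighted by the density at $$x$$, while mass flowing in is weighted by the density at the other endpoint; this asymmetry is the source of the quasi-metric behaviour noted in the abstract.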
  2. We study local filters for the Lipschitz property of real-valued functions f : V → [0,r], where the Lipschitz property is defined with respect to an arbitrary undirected graph G = (V, E). We give nearly optimal local Lipschitz filters both with respect to ℓ1-distance and ℓ0-distance. Previous work only considered unbounded-range functions over $$[n]^d$$. Jha and Raskhodnikova (SICOMP '13) gave an algorithm for such functions with lookup complexity exponential in d, which Awasthi et al. (ACM Trans. Comput. Theory) showed was necessary in this setting. We demonstrate that important applications of local Lipschitz filters can be accomplished with filters for functions whose range is bounded in [0,r]. For functions f : $$[n]^d$$ → [0,r], we achieve running time $$(dr\log n)^{O(\log r)}$$ for the ℓ1-respecting filter and $$d^{O(r)}\,\mathrm{polylog}\,n$$ for the ℓ0-respecting filter, thus circumventing the lower bound. Our local filters provide a novel Lipschitz extension that can be implemented locally. Furthermore, we show that our algorithms are nearly optimal in terms of the dependence on r for the domain $$\{0,1\}^d$$, an important special case of the domain $$[n]^d$$. In addition, our lower bound resolves an open question of Awasthi et al., removing one of the conditions necessary for their lower bound for general range. We prove our lower bound via a reduction from distribution-free Lipschitz testing and a new technique for proving hardness for adaptive algorithms. Finally, we provide two applications of our local filters to real-valued functions, with no restrictions on the range. In the first application, we use them in conjunction with the Laplace mechanism for differential privacy and noisy binary search to provide mechanisms for privately releasing outputs of black-box functions, even in the presence of malicious clients. In particular, our differentially private mechanism for arbitrary real-valued functions runs in time $$2^{\mathrm{polylog}\,\min(r,n^d)}$$ and, for honest clients, has accuracy comparable to the Laplace mechanism for Lipschitz functions, up to a factor of $$O(\log \min(r,n^d))$$. In the second application, we use our local filters to obtain the first nontrivial tolerant tester for the Lipschitz property. Our tester works for functions of the form f : $$\{0,1\}^d$$ → ℝ, makes queries, and has tolerance ratio 2.01. Our applications demonstrate that local filters for bounded-range functions can be applied to construct efficient algorithms for arbitrary real-valued functions.
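    To make the differential-privacy application concrete, here is a minimal sketch of the standard Laplace mechanism that the filters feed into (our illustration with assumed names, not the paper's filter-based mechanism): once a local filter certifies that the released function is c-Lipschitz, its sensitivity on neighbouring datasets is at most c, so Laplace noise of scale c/ε suffices.

```python
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(value, lipschitz_const, epsilon):
    """Release value = f(x) with epsilon-differential privacy.

    Assumes f is c-Lipschitz with respect to the Hamming graph on
    datasets, so its sensitivity on neighbouring inputs is at most c.
    In the paper, local filters certify (or enforce) this Lipschitz
    bound for a black-box f; here we simply assume it holds.
    """
    scale = lipschitz_const / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release the number of ones in a binary dataset,
# a 1-Lipschitz function of the dataset.
x = rng.integers(0, 2, size=100)
print(laplace_mechanism(x.sum(), lipschitz_const=1.0, epsilon=0.5))
```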
  3.
    Semi-supervised and unsupervised machine learning methods often rely on graphs to model data, prompting research on how theoretical properties of operators on graphs are leveraged in learning problems. While most of the existing literature focuses on undirected graphs, directed graphs are very important in practice, giving models for physical, biological or transportation networks, among many other applications. In this paper, we propose a new framework for rigorously studying continuum limits of learning algorithms on directed graphs. We use the new framework to study the PageRank algorithm and show how it can be interpreted as a numerical scheme on a directed graph involving a type of normalised graph Laplacian. We show that the corresponding continuum limit problem, which is taken as the number of webpages grows to infinity, is a second-order, possibly degenerate, elliptic equation that contains reaction, diffusion and advection terms. We prove that the numerical scheme is consistent and stable and compute explicit rates of convergence of the discrete solution to the solution of the continuum limit partial differential equation. We give applications to proving stability and asymptotic regularity of the PageRank vector. Finally, we illustrate our results with numerical experiments and explore an application to data depth.
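    As a concrete reference point, the discrete object whose continuum limit the paper studies is the PageRank vector; a minimal power-iteration sketch (our illustration, not the authors' numerical scheme) follows.

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank on a directed graph.

    adj[i, j] = 1 if page i links to page j; dangling pages (no
    outlinks) are treated as linking uniformly to every page.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling rows become uniform.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = alpha * (P.T @ r) + (1 - alpha) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r

# Tiny example: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]], dtype=float)
print(pagerank(A))  # stationary ranking vector; entries sum to 1
```

    Loosely, the damping term (1 - alpha)/n plays the role of the reaction term in the continuum equation described above, while the link-following term contributes the advection and diffusion.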
  4.
    Weakly labeled data are inevitable in various research areas of artificial intelligence (AI) where one has only partial knowledge of the complete dataset. One reason for weakly labeled data in AI is a shortage of accurately labeled data; strict privacy controls or accidental loss may also cause missing-data problems. Supervised machine learning (ML), however, requires accurately labeled data in order to solve a problem successfully. Data labeling is difficult and time-consuming, as it requires manual work, error-free results, and sometimes the involvement of human experts (e.g., for medical labeled data). In contrast, unlabeled data are inexpensive and easily available. Because there is not enough labeled training data, researchers sometimes obtain only one or a few data points per category or label. Training a supervised ML model from such a small set of labeled data is a challenging task. The objective of this research is to recover missing labels from the dataset using a semisupervised ML approach. In this work, a novel convolutional neural network-based framework is trained with a few instances of a class to perform metric learning. The dataset is then converted into a graph signal, which is recovered using a recovery algorithm (RA) based on the graph Fourier transform. The proposed approach was evaluated on a Fashion dataset for accuracy and precision and performed significantly better than graph neural networks and other state-of-the-art methods.
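    The recovery step can be pictured as bandlimited reconstruction of a graph signal. The sketch below is our simplification (the bandwidth k and all names are assumptions, not the paper's RA): fit the known labels in the span of the first k Laplacian eigenvectors and read off the missing entries.

```python
import numpy as np

def recover_labels(W, y, mask, k=5):
    """Recover missing entries of a graph signal y.

    W:    symmetric weight matrix of the similarity graph
    y:    signal values (labels); arbitrary where mask is False
    mask: boolean array, True where the label is known
    k:    assumed bandwidth (number of low-frequency modes kept)
    """
    L = np.diag(W.sum(axis=1)) - W   # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)         # graph Fourier basis (columns)
    Uk = U[:, :k]                    # low-frequency eigenvectors
    # Least-squares fit of the known labels in the bandlimited subspace.
    coeffs, *_ = np.linalg.lstsq(Uk[mask], y[mask], rcond=None)
    return Uk @ coeffs               # reconstructed signal on all vertices

# Toy example: a path graph with labels known only at the endpoints.
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = np.zeros(n)
y[-1] = 1.0
mask = np.zeros(n, dtype=bool)
mask[0] = mask[-1] = True
print(recover_labels(W, y, mask, k=2).round(2))
```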
  5. Abstract: Consider a set of n vertices, where each vertex has a location in $$\mathbb{R}^d$$, sampled uniformly from the unit cube in $$\mathbb{R}^d$$, and an associated weight. Construct a random graph by placing edges independently for each vertex pair with a probability that is a function of the distance between the locations and of the vertex weights. Under appropriate integrability assumptions on the edge probabilities that imply sparseness of the model, and after appropriately blowing up the locations, we prove that the local limit of this random graph sequence is the (countably) infinite random graph on $$\mathbb{R}^d$$ with vertex locations given by a homogeneous Poisson point process and weights that are independent and identically distributed copies of the limiting vertex weight. Our set-up covers many sparse geometric random graph models from the literature, including geometric inhomogeneous random graphs (GIRGs), hyperbolic random graphs, continuum scale-free percolation, and weight-dependent random connection models. We prove that the limiting degree distribution is mixed Poisson and the typical degree sequence is uniformly integrable, and we obtain convergence results on various measures of clustering in our graphs as a consequence of local convergence. Finally, as a byproduct of our argument, we prove a doubly logarithmic lower bound on typical distances in this general setting.
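    To make the setup concrete, here is a minimal sampler for one model in this class, a GIRG-style kernel (the Pareto weights, the kernel shape, and the exponents alpha and tau are illustrative choices, not the paper's general assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_girg(n, d=2, alpha=1.5, tau=2.5):
    """Sample a GIRG-style spatial random graph with n vertices.

    Locations are uniform on [0, 1]^d, weights are Pareto with tail
    exponent tau - 1, and each pair connects independently with
    probability min(1, (w_i * w_j / n)^alpha / dist^(alpha * d)).
    """
    x = rng.uniform(size=(n, d))            # vertex locations
    w = rng.pareto(tau - 1, size=n) + 1.0   # heavy-tailed vertex weights
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(x[i] - x[j])
            p = min(1.0, (w[i] * w[j] / n) ** alpha / dist ** (alpha * d))
            if rng.random() < p:
                edges.append((i, j))
    return x, w, edges

x, w, edges = sample_girg(500)
print(len(edges), "edges; mean degree", 2 * len(edges) / 500)
```

    Sparseness here corresponds to the mean degree staying bounded as n grows, which is the regime in which the local limit described above appears.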