

Search for: All records

Creators/Authors contains: "Kumar, S"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. We propose a novel statistical inference framework for streaming principal component analysis (PCA) using Oja's algorithm, enabling the construction of confidence intervals for individual entries of the estimated eigenvector. Most existing works on streaming PCA focus on providing sharp sin-squared error guarantees. Recently, there has been some interest in uncertainty quantification for the sin-squared error. However, uncertainty quantification or sharp error guarantees for entries of the estimated eigenvector in the streaming setting remains largely unexplored. We derive a sharp Bernstein-type concentration bound for elements of the estimated vector matching the optimal error rate up to logarithmic factors. We also establish a Central Limit Theorem for a suitably centered and scaled subset of the entries. To efficiently estimate the coordinate-wise variance, we introduce a provably consistent subsampling algorithm that leverages the median-of-means approach, empirically achieving similar accuracy to multiplier bootstrap methods while being significantly more computationally efficient. Numerical experiments demonstrate its effectiveness in providing reliable uncertainty estimates with a fraction of the computational cost of existing methods. 
    Free, publicly-accessible full text available June 14, 2026
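The single-pass iteration at the heart of item 1 can be sketched as follows. This is an illustrative sketch only, with a hypothetical function name and a constant step size; the paper's analysis uses tuned step-size schedules and this is not the authors' code.

```python
import numpy as np

def oja_streaming_pca(stream, dim, lr=0.01):
    """One-pass Oja iteration for the top eigenvector of the data covariance.

    `lr` is an illustrative constant step size; the paper analyzes specific
    decaying schedules, which this sketch does not reproduce.
    """
    rng = np.random.default_rng(0)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)            # random unit initialization
    for x in stream:                  # single pass, O(dim) memory
        w += lr * x * (x @ w)         # Oja update: w <- w + lr * x x^T w
        w /= np.linalg.norm(w)        # project back to the unit sphere
    return w
```

Confidence intervals for individual entries of the output, as in the abstract, additionally require a coordinate-wise variance estimate (e.g., the subsampled median-of-means scheme described above), which this sketch omits.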
  2. Estimating the geometric median of a dataset is a robust counterpart to mean estimation, and is a fundamental problem in computational geometry. Recently, [HSU24] gave an (ε, δ)-differentially private algorithm obtaining an α-multiplicative approximation to the geometric median objective, (1/n) ∑_{i∈[n]} ‖· − x_i‖, given a dataset {x_i}_{i∈[n]} ⊂ ℝ^d. Their algorithm requires n ≳ √d · 1/(αε) samples, which they prove is information-theoretically optimal. This result is surprising because its error scales with the effective radius of the dataset (i.e., of a ball capturing most points), rather than the worst-case radius. We give an improved algorithm that obtains the same approximation quality, also using n ≳ √d · 1/(αε) samples, but in time Õ(nd + d/α²). Our runtime is nearly linear, plus the cost of the cheapest non-private first-order method due to [CLM+16]. To achieve our results, we use subsampling and geometric aggregation tools inspired by FriendlyCore [TCK+22] to speed up the "warm start" component of the [HSU24] algorithm, combined with a careful custom analysis of DP-SGD's sensitivity for the geometric median objective.
    Free, publicly-accessible full text available May 26, 2026
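As a non-private point of reference for the objective in item 2, the classical Weiszfeld iteration below minimizes (1/n) ∑ ‖y − x_i‖. It is a sketch only: the paper's contribution is the differentially private version, whose private warm start and DP-SGD sensitivity analysis are not reproduced here.

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """Weiszfeld iteration for argmin_y (1/n) sum_i ||y - x_i||.

    Plain non-private baseline; `eps` guards against division by zero
    when the iterate lands on a data point.
    """
    y = X.mean(axis=0)                        # mean as a warm start
    for _ in range(iters):
        d = np.linalg.norm(X - y, axis=1)
        w = 1.0 / np.maximum(d, eps)          # inverse-distance weights
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:   # converged
            break
        y = y_new
    return y
```

Unlike the mean, the output barely moves when a single point is dragged far away, which is the robustness property the abstract refers to.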
  3. Oja's algorithm for streaming principal component analysis (PCA) for n data points in a d-dimensional space achieves the same sin-squared error O(r_eff/n) as the offline algorithm in O(d) space and O(nd) time, using a single pass through the data points. Here r_eff is the effective rank (the ratio of the trace to the principal eigenvalue of the population covariance matrix Σ). Under this computational budget, we consider the problem of sparse PCA, where the principal eigenvector of Σ is s-sparse and r_eff can be large. In this setting, to our knowledge, there are no known single-pass algorithms that achieve the minimax error bound in O(d) space and O(nd) time without either requiring strong initialization conditions or assuming further structure (e.g., spiked) of the covariance matrix. We show that a simple single-pass procedure that thresholds the output of Oja's algorithm (the Oja vector) can achieve the minimax error bound under some regularity conditions in O(d) space and O(nd) time. We present a nontrivial and novel analysis of the entries of the unnormalized Oja vector, which involves the projection of a product of independent random matrices onto a random initial vector. This is completely different from previous analyses of Oja's algorithm and matrix products, which were done when r_eff is bounded.
    Free, publicly-accessible full text available March 11, 2026
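The procedure in item 3 can be sketched as a plain Oja pass followed by entrywise hard thresholding. The threshold `tau` and step size below are illustrative placeholders, not the values dictated by the paper's regularity conditions, and the function name is hypothetical.

```python
import numpy as np

def thresholded_oja(X, tau, lr=0.01, seed=0):
    """Single-pass Oja iteration, then entrywise hard thresholding.

    `tau` is an illustrative threshold for zeroing out coordinates
    outside the sparse support; the paper derives its choice from
    regularity conditions not reproduced here.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for x in X:                        # one pass, O(d) memory
        w += lr * x * (x @ w)
        w /= np.linalg.norm(w)
    w[np.abs(w) < tau] = 0.0           # hard-threshold small entries
    n = np.linalg.norm(w)
    return w / n if n > 0 else w
```

On data with an s-sparse spike, the thresholding step discards the small noise coordinates of the Oja vector while keeping the signal support.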
  4. Posterior sampling with the spike-and-slab prior [MB88], a popular multimodal distribution used to model uncertainty in variable selection, is considered the theoretical gold standard method for Bayesian sparse linear regression [CPS09, Roc18]. However, designing provable algorithms for performing this sampling task is notoriously challenging. Existing posterior samplers for Bayesian sparse variable selection tasks either require strong assumptions about the signal-to-noise ratio (SNR) [YWJ16], only work when the measurement count grows at least linearly in the dimension [MW24], or rely on heuristic approximations to the posterior. We give the first provable algorithms for spike-and-slab posterior sampling that apply for any SNR, and use a measurement count sublinear in the problem dimension. Concretely, assume we are given a measurement matrix X ∈ ℝ^{n×d} and noisy observations y = Xθ⋆ + ξ of a signal θ⋆ drawn from a spike-and-slab prior π with a Gaussian diffuse density and expected sparsity k, where ξ ∼ N(0_n, σ²I_n). We give a polynomial-time high-accuracy sampler for the posterior π(· | X, y), for any SNR σ⁻¹ > 0, as long as n ≥ k³ · polylog(d) and X is drawn from a matrix ensemble satisfying the restricted isometry property. We further give a sampler that runs in near-linear time ≈ nd in the same setting, as long as n ≥ k⁵ · polylog(d). To demonstrate the flexibility of our framework, we extend our result to spike-and-slab posterior sampling with Laplace diffuse densities, achieving similar guarantees when σ = O(1/k) is bounded.
    Free, publicly-accessible full text available March 4, 2026
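To make the observation model in item 4 concrete, the sketch below draws θ⋆ from a spike-and-slab prior with a Gaussian slab and expected sparsity k, and forms noisy measurements y = Xθ⋆ + ξ with a Gaussian design (a standard ensemble satisfying the restricted isometry property with high probability). It illustrates the problem setup only, not the authors' samplers; all names are hypothetical.

```python
import numpy as np

def spike_and_slab_draw(d, k, slab_scale=1.0, rng=None):
    """Draw theta from a spike-and-slab prior: each coordinate is nonzero
    with probability k/d (so the expected sparsity is k), and nonzero
    coordinates are drawn from a Gaussian slab."""
    if rng is None:
        rng = np.random.default_rng(0)
    spikes = rng.random(d) < k / d
    return np.where(spikes, rng.normal(0.0, slab_scale, d), 0.0)

def observe(theta, n, sigma, rng=None):
    """Noisy linear measurements y = X theta + xi, xi ~ N(0, sigma^2 I),
    with a column-normalized Gaussian design X."""
    if rng is None:
        rng = np.random.default_rng(1)
    d = theta.shape[0]
    X = rng.standard_normal((n, d)) / np.sqrt(n)
    y = X @ theta + sigma * rng.standard_normal(n)
    return X, y
```

The sampling problem in the abstract is the reverse direction: given (X, y) generated this way, draw from the posterior over θ⋆.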
  5. This study investigates the programmable strain sensing capability, auxetic behaviour, and failure modes of 3D-printed, self-monitoring auxetic lattices fabricated from in-house engineered polyetheretherketone (PEEK) reinforced with multi-walled carbon nanotubes (MWCNTs). A skeletally-parametrized geometric modelling framework, combining Voronoi tessellation with 2D wallpaper symmetries, is used to systematically explore a vast range of non-predetermined topologies beyond traditional lattice designs. A representative set of these architectures is realized via fused filament fabrication, and multiscale characterization—including macroscale tensile testing and microstructural analysis—demonstrates tuneable multifunctional performance as a function of MWCNT content and unit cell topology. Real-time resistance measurements track deformation, damage initiation, and progression, with the sensitivity factor increasing from below 1 in the elastic regime (strain sensitivity) to as high as 80 for PEEK/MWCNT at 6 wt.% under inelastic deformation (damage sensitivity). Implicit architecture-topology tailoring further allows fine-tuning of mechanical properties, achieving stiffness values ranging from 9 MPa to 63 MPa and negative Poisson’s ratios between –0.63 and –0.17 using ~3 wt.% MWCNT at a relative density of 25%. Furthermore, a novel piezoresistive finite element model, implemented in Abaqus via a user-defined subroutine, accurately captures the electromechanical response up to the onset of ligament failure, offering predictive capability. These results demonstrate how architecture-topology tuning can be leveraged to customise strain sensitivity and failure modes, enabling the development of multifunctional piezoresistive lattice composites for applications such as smart orthopaedic implants, aerospace skins, and impact-tolerant systems. 
    Free, publicly-accessible full text available January 1, 2026
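The sensitivity factor reported in item 5 is the standard piezoresistive gauge factor, the relative resistance change per unit strain. A minimal computation (hypothetical helper name and example values, not measurements from the paper):

```python
def sensitivity_factor(r0, r, strain):
    """Piezoresistive gauge factor: (delta R / R0) divided by applied strain."""
    return ((r - r0) / r0) / strain
```

For instance, a hypothetical resistance rise from 100 Ω to 180 Ω at 1% strain corresponds to a factor of 80, the order of the damage-regime value quoted in the abstract.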
  6. Independent Component Analysis (ICA) was introduced in the 1980s as a model for Blind Source Separation (BSS), which refers to the process of recovering the sources underlying a mixture of signals, with little knowledge about the source signals or the mixing process. While there are many sophisticated algorithms for estimation, different methods have different shortcomings. In this paper, we develop a nonparametric score to adaptively pick the right algorithm for ICA with arbitrary Gaussian noise. The novelty of this score stems from the fact that it assumes only a finite second moment of the data and uses the characteristic function to evaluate the quality of the estimated mixing matrix without any knowledge of the parameters of the noise distribution. In addition, we propose some new contrast functions and algorithms that enjoy the same fast computability as existing algorithms like FastICA and JADE but work in domains where the former may fail. While these too may have weaknesses, our proposed diagnostic, as shown by our simulations, can remedy them. Finally, we propose a theoretical framework to analyze the local and global convergence properties of our algorithms.
    Free, publicly-accessible full text available November 5, 2025
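A generic sketch of the characteristic-function idea in item 6 (not the paper's exact statistic): compare the joint empirical characteristic function of each pair of recovered sources with the product of their marginal characteristic functions. The gap vanishes exactly when the pair is independent, and it only requires finite second moments.

```python
import numpy as np

def cf_independence_score(S, ts=(0.5, 1.0)):
    """Characteristic-function independence diagnostic for recovered sources.

    S has shape (num_sources, num_samples). For each source pair and each
    frequency pair, accumulates |joint CF - product of marginal CFs|;
    smaller scores indicate more independent (better separated) sources.
    """
    k, _ = S.shape
    score = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            for t1 in ts:
                for t2 in ts:
                    joint = np.mean(np.exp(1j * (t1 * S[i] + t2 * S[j])))
                    prod = (np.mean(np.exp(1j * t1 * S[i]))
                            * np.mean(np.exp(1j * t2 * S[j])))
                    score += abs(joint - prod)
    return score
```

Such a score can rank the outputs of competing ICA algorithms on the same data, which is the "adaptively pick the right algorithm" use case the abstract describes.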
  7. A clear experimental signature of the population of the lowest triplet state ³T₁ of the methane dication is identified in a photoionization experiment. This state is populated only in valence ionization and is absent when the dication is formed by core ionization followed by Auger-Meitner decay. For valence ionization, the total internal energy of the CH₃⁺ fragment, formed during the deprotonation of CH₄²⁺, is evaluated. Notably, the distribution of this internal energy peaks at the same value regardless of the initially populated electronic state of CH₄²⁺. We find that excited electronic states of CH₃⁺ are predominantly populated with significant rovibrational excitation. Published by the American Physical Society, 2025.
    Free, publicly-accessible full text available April 1, 2026
  8. The k-principal component analysis (k-PCA) problem is a fundamental algorithmic primitive that is widely used in data analysis and dimensionality reduction applications. In statistical settings, the goal of k-PCA is to identify a top eigenspace of the covariance matrix of a distribution, to which we only have black-box access via samples. Motivated by these settings, we analyze black-box deflation methods as a framework for designing k-PCA algorithms, where we model access to the unknown target matrix via a black-box 1-PCA oracle that returns an approximate top eigenvector, under two popular notions of approximation. Despite being arguably the most natural reduction-based approach to k-PCA algorithm design, such black-box methods, which recursively call a 1-PCA oracle k times, were previously poorly understood. Our main contribution is significantly sharper bounds on the approximation-parameter degradation of deflation methods for k-PCA. For a quadratic-form notion of approximation we term ePCA (energy PCA), we show deflation methods suffer no parameter loss. For an alternative well-studied approximation notion we term cPCA (correlation PCA), we tightly characterize the parameter regimes where deflation methods are feasible. Moreover, we show that in all feasible regimes, k-cPCA deflation algorithms suffer no asymptotic parameter loss for any constant k. We apply our framework to obtain state-of-the-art k-PCA algorithms robust to dataset contamination, improving prior work in sample complexity by a poly(k) factor.
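The black-box deflation framework of item 8 can be sketched as k recursive calls to a 1-PCA oracle, deflating the matrix after each call. Projection deflation is shown; the paper's subject is how the oracle's approximation error compounds across the k calls, which this sketch ignores.

```python
import numpy as np

def deflation_kpca(cov, k, one_pca):
    """Black-box deflation for k-PCA: call a 1-PCA oracle k times.

    `one_pca` maps a symmetric PSD matrix to an (approximate) unit top
    eigenvector; after each call the found direction is projected out,
    so the next call targets the next eigenvector.
    """
    d = cov.shape[0]
    vecs = []
    M = cov.astype(float).copy()
    for _ in range(k):
        v = one_pca(M)
        v = v / np.linalg.norm(v)
        vecs.append(v)
        P = np.eye(d) - np.outer(v, v)   # project out the found direction
        M = P @ M @ P                     # projection deflation
    return np.stack(vecs)
```

With an exact 1-PCA oracle (e.g., one built on `np.linalg.eigh`), this recovers the top-k eigenvectors in order; the abstract's question is what happens when each oracle call is only approximately correct.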