In sparse linear regression, the SLOPE estimator generalizes LASSO by assigning magnitude-dependent regularizations to different coordinates of the estimate. In this paper, we present an asymptotically exact characterization of the performance of SLOPE in the high-dimensional regime where the number of unknown parameters grows in proportion to the number of observations. Our asymptotic characterization enables us to derive optimal regularization sequences that either minimize the MSE or maximize the power in variable selection under any given level of Type-I error. In both cases, we show that the optimal design can be recast as certain infinite-dimensional convex optimization problems, which admit efficient and accurate finite-dimensional approximations. Numerical simulations verify our asymptotic predictions and demonstrate the superiority of our optimal design over LASSO and a regularization sequence previously proposed in the literature.
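To make the penalty concrete: SLOPE pairs the sorted absolute coefficients with a non-increasing regularization sequence, whereas LASSO applies a single level to every coordinate. Below is a minimal NumPy sketch of the SLOPE objective only, with an arbitrary, illustrative lambda sequence; it is not the asymptotic analysis or the optimal design described in the abstract.

```python
import numpy as np

def slope_penalty(beta, lam):
    """SLOPE penalty: sum_i lam_i * |beta|_(i), where |beta|_(1) >= |beta|_(2) >= ...
    and lam is a non-increasing regularization sequence."""
    abs_sorted = np.sort(np.abs(beta))[::-1]
    return float(np.dot(np.asarray(lam), abs_sorted))

def slope_objective(X, y, beta, lam):
    """Least-squares loss plus the SLOPE penalty (LASSO is the special case of a constant lam)."""
    resid = y - X @ beta
    return 0.5 * float(resid @ resid) + slope_penalty(beta, lam)

# Toy usage with a hypothetical linearly decaying regularization sequence.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
beta = np.zeros(20)
beta[:3] = 2.0
y = X @ beta + 0.1 * rng.standard_normal(50)
lam = np.linspace(1.0, 0.1, 20)        # non-increasing lambda_1 >= ... >= lambda_p > 0
print(slope_objective(X, y, beta, lam))
```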
Adaptive nonparametric regression with the K-nearest neighbour fused lasso
Summary: The fused lasso, also known as total-variation denoising, is a locally adaptive function estimator over a regular grid of design points. In this article, we extend the fused lasso to settings in which the design points do not occur on a regular grid, leading to a method for nonparametric regression. This approach, which we call the $K$-nearest-neighbours fused lasso, involves computing the $K$-nearest-neighbours graph of the design points and then performing the fused lasso over this graph. We show that this procedure has a number of theoretical advantages over competing methods: specifically, it inherits local adaptivity from its connection to the fused lasso, and it inherits manifold adaptivity from its connection to the $K$-nearest-neighbours approach. A simulation study and an application to flu data show that the method gives excellent results. For completeness, we also study an estimator that uses an $\epsilon$-graph rather than a $K$-nearest-neighbours graph and contrast it with the $K$-nearest-neighbours fused lasso.
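The two-step recipe in the abstract (build the $K$-nearest-neighbours graph, then solve the fused lasso over its edges) can be sketched directly. The snippet below is a rough illustration assuming scikit-learn and CVXPY are available; the generic convex solver, the toy data, and the choices of $K$ and the tuning parameter are placeholders, not the specialized algorithms or theory of the paper.

```python
import numpy as np
import cvxpy as cp
from scipy import sparse
from sklearn.neighbors import kneighbors_graph

def knn_fused_lasso(X, y, k=5, lam=1.0):
    """Estimate theta by solving
         minimize  0.5 * ||y - theta||_2^2 + lam * sum_{(i,j) in E} |theta_i - theta_j|,
    where E is the edge set of the K-nearest-neighbours graph of the design points X."""
    n = X.shape[0]
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    rows, cols = A.nonzero()
    edges = sorted({(min(i, j), max(i, j)) for i, j in zip(rows, cols)})  # undirected, deduplicated
    m = len(edges)

    # Edge-difference operator D: (D theta)_e = theta_i - theta_j for edge e = (i, j).
    data = np.concatenate([np.ones(m), -np.ones(m)])
    row_idx = np.concatenate([np.arange(m), np.arange(m)])
    col_idx = np.concatenate([[i for i, _ in edges], [j for _, j in edges]])
    D = sparse.csr_matrix((data, (row_idx, col_idx)), shape=(m, n))

    theta = cp.Variable(n)
    objective = 0.5 * cp.sum_squares(y - theta) + lam * cp.norm1(D @ theta)
    cp.Problem(cp.Minimize(objective)).solve()
    return theta.value

# Toy usage: a piecewise-constant signal over scattered two-dimensional design points.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 1.0, 2.0, 0.0) + 0.3 * rng.standard_normal(200)
theta_hat = knn_fused_lasso(X, y, k=5, lam=0.5)
```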
- Award ID(s): 1712996
- PAR ID: 10383980
- Date Published:
- Journal Name: Biometrika
- Volume: 107
- Issue: 2
- ISSN: 0006-3444
- Page Range / eLocation ID: 293 to 310
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We introduce chroma subsampling for 3D point cloud attribute compression by proposing a novel technique to sample points irregularly placed in 3D space. While most current video compression standards use chroma subsampling, those methods cannot be applied directly to 3D point clouds, given their irregularity and sparsity. In this work, we develop a framework to incorporate chroma subsampling into geometry-based point cloud encoders, such as the region adaptive hierarchical transform (RAHT) and the region adaptive graph Fourier transform (RAGFT). We propose different sampling patterns on a regular 3D grid to sample the points at different rates. We use a simple graph-based nearest neighbor interpolation technique to reconstruct the full-resolution point cloud at the decoder end. Experimental results demonstrate that our proposed method provides significant coding gains with negligible impact on the reconstruction quality. For some sequences, we observe a bitrate reduction of 10-15% under the Bjontegaard metric. More generally, perceptual masking makes it possible to achieve larger bitrate reductions without visible changes in quality. (A hedged sketch of the nearest-neighbour reconstruction step appears after this list.)
- We consider the problem of inferring the conditional independence graph (CIG) of a high-dimensional stationary multivariate Gaussian time series. A p-variate Gaussian time series graphical model associated with an undirected graph with p vertices is defined as the family of time series that obey the conditional independence restrictions implied by the edge set of the graph. A sparse-group lasso-based frequency-domain formulation of the problem has been considered in the literature, in which the objective is to estimate the inverse power spectral density (PSD) of the data by optimizing a sparse-group lasso penalized log-likelihood cost function formulated in the frequency domain. The CIG is then inferred from the estimated inverse PSD. In this paper we establish sufficient conditions for consistency of the inverse PSD estimator resulting from the sparse-group graphical lasso-based approach. (The sparse-group penalty itself is sketched after this list.)
- Entanglement is one of the physical properties of quantum systems responsible for the computational hardness of simulating quantum systems. While the runtime of specific algorithms, notably tensor network algorithms, explicitly depends on the amount of entanglement in the system, it is unknown whether this connection runs deeper and whether entanglement can also cause inherent, algorithm-independent complexity. In this Letter, we quantitatively connect the entanglement present in certain quantum systems to the computational complexity of simulating those systems. Moreover, we completely characterize the entanglement and complexity as a function of a system parameter. Specifically, we consider the task of simulating single-qubit measurements of k-regular graph states on n qubits. We show that, as the regularity parameter is increased from 1 to n−1, there is a sharp transition from an easy regime with low entanglement to a hard regime with high entanglement at k = 3, and a transition back to easy and low entanglement at k = n−3. As a key technical result, we prove a duality for the simulation complexity of regular graph states between low and high regularity. (A brute-force construction of small graph states is sketched after this list.)
- Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z. It can be used to quantify conditional dependence among variables in many data-driven inference problems such as graphical models, causal learning, feature selection and time-series analysis. While k-nearest neighbor (kNN) based estimators as well as kernel-based methods have been widely used for CMI estimation, they suffer severely from the curse of dimensionality. In this paper, we leverage advances in classifiers and generative models to design methods for CMI estimation. Specifically, we introduce an estimator for KL-divergence based on the likelihood ratio, obtained by training a classifier to distinguish the observed joint distribution from the product distribution. We then show how to construct several CMI estimators using this basic divergence estimator by drawing ideas from conditional generative models. We demonstrate that the estimates from our proposed approaches do not degrade in performance with increasing dimension and obtain significant improvement over the widely used KSG estimator. Finally, as an application of accurate CMI estimation, we use our best estimator for conditional independence testing and achieve superior performance to the state-of-the-art tester on both simulated and real datasets. (The classifier-based divergence estimator is sketched after this list.)
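For the chroma-subsampling item above, the decoder-side step can be illustrated with a nearest-neighbour interpolation of chroma attributes over voxelized points. The sketch below (Python with SciPy; the parity-based sampling pattern, voxel size, and variable names are illustrative assumptions, not the RAHT/RAGFT pipeline of the paper) keeps chroma only at points on one parity class of the voxel grid and fills in the rest from the nearest retained point.

```python
import numpy as np
from scipy.spatial import cKDTree

def subsample_and_reconstruct_chroma(points, chroma, voxel_size=1.0):
    """Keep chroma only at points whose voxel coordinates have even parity
    (an illustrative sampling pattern on a regular 3D grid), then reconstruct
    full-resolution chroma by nearest-neighbour interpolation."""
    voxels = np.floor(points / voxel_size).astype(int)
    keep = (voxels.sum(axis=1) % 2 == 0)        # illustrative checkerboard-style pattern
    tree = cKDTree(points[keep])
    _, nearest = tree.query(points)             # index of nearest retained point, per point
    return chroma[keep][nearest], keep

# Toy usage: random points with two chroma channels (e.g. Cb, Cr).
rng = np.random.default_rng(2)
pts = rng.uniform(0, 32, size=(1000, 3))
cbcr = rng.uniform(0, 1, size=(1000, 2))
recon, kept = subsample_and_reconstruct_chroma(pts, cbcr)
print(kept.mean(), np.abs(recon - cbcr).mean())
```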
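For the time-series graphical-model item, the penalty driving the frequency-domain estimator can be written down directly. The snippet below is only a sketch of a generic sparse-group lasso penalty over groups of complex coefficients, with made-up weights and grouping; it is not the full penalized-likelihood inverse-PSD estimator analysed in that paper.

```python
import numpy as np

def sparse_group_lasso_penalty(coeffs, groups, lam1=1.0, lam2=1.0):
    """Generic sparse-group lasso penalty:
       lam1 * sum_k |coeffs_k|  +  lam2 * sum_g ||coeffs_g||_2,
    where each group could collect the coefficients of one graph edge across frequencies."""
    coeffs = np.asarray(coeffs)
    lasso_part = lam1 * np.sum(np.abs(coeffs))
    group_part = lam2 * sum(np.linalg.norm(coeffs[idx]) for idx in groups)
    return float(lasso_part + group_part)

# Toy usage: 4 hypothetical "edges", each with coefficients at 8 frequencies.
rng = np.random.default_rng(3)
theta = rng.standard_normal(32) + 1j * rng.standard_normal(32)
groups = [np.arange(8 * g, 8 * (g + 1)) for g in range(4)]
print(sparse_group_lasso_penalty(theta, groups, lam1=0.5, lam2=1.0))
```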
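For the graph-state item, the object under study is easy to build explicitly at small n: a k-regular graph state is |+>^n with a CZ applied across every edge, and its bipartite entanglement can be read off a Schmidt decomposition. The brute-force sketch below (Python with NumPy/NetworkX, exponential in n and purely illustrative) constructs such a state and computes the entanglement entropy across a fixed cut; it does not implement the measurement-simulation algorithms or the complexity results of the Letter.

```python
import itertools
import numpy as np
import networkx as nx

n, k = 8, 3                                   # small instance; the state vector has 2**n entries
G = nx.random_regular_graph(k, n, seed=0)     # a random k-regular graph on n qubits

# Graph state amplitudes: <x|G> = (-1)^(# edges with both endpoints set to 1) / sqrt(2**n)
amps = np.empty(2 ** n)
for idx, x in enumerate(itertools.product((0, 1), repeat=n)):
    parity = sum(x[i] * x[j] for i, j in G.edges()) % 2
    amps[idx] = (-1) ** parity
amps /= np.sqrt(2.0 ** n)

# Entanglement entropy across the cut {first n//2 qubits} vs {rest}, via Schmidt coefficients.
schmidt = np.linalg.svd(amps.reshape(2 ** (n // 2), 2 ** (n - n // 2)), compute_uv=False)
probs = schmidt ** 2
probs = probs[probs > 1e-12]                  # drop numerically zero Schmidt weights
entropy = -np.sum(probs * np.log2(probs))
print(f"entanglement entropy across a half cut: {entropy:.3f} ebits")
```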
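For the CMI-estimation item, the core building block it describes (a classifier-based likelihood-ratio estimate of KL divergence) can be sketched in a few lines. The code below (Python with scikit-learn; the logistic-regression classifier, the quadratic feature map, and the toy data are illustrative choices, not the architectures or the conditional generative models used in the paper) trains a classifier to separate samples of a joint distribution from samples of the product of marginals and averages the estimated log-ratio.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classifier_kl_estimate(p_feats, q_feats):
    """Estimate KL(P || Q) from samples via a classifier-based likelihood ratio:
    train d(x) ~ P(sample came from P | x) with balanced classes, then
    KL(P || Q) ~= E_P[ log( d(x) / (1 - d(x)) ) ]."""
    X = np.vstack([p_feats, q_feats])
    y = np.concatenate([np.ones(len(p_feats)), np.zeros(len(q_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    d = np.clip(clf.predict_proba(p_feats)[:, 1], 1e-6, 1 - 1e-6)
    return float(np.mean(np.log(d / (1 - d))))

def quad_features(xy):
    """Quadratic feature map so a linear classifier can represent a Gaussian log-ratio."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([x, y, x * y, x ** 2, y ** 2])

# Toy usage: I(X; Y) is the KL divergence between the joint law and the product of marginals.
rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
y_dep = 0.8 * x + 0.6 * rng.standard_normal(5000)        # Corr(X, Y) = 0.8, true MI ~ 0.51 nats
joint = np.column_stack([x, y_dep])
product = np.column_stack([x, rng.permutation(y_dep)])    # shuffling Y breaks the dependence
print(classifier_kl_estimate(quad_features(joint), quad_features(product)))
```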