Title: A Bias Bound Approach to Non-parametric Inference
Abstract: The traditional approach to obtaining valid confidence intervals for non-parametric quantities is to select a smoothing parameter such that the bias of the estimator is negligible relative to its standard deviation. While this approach is conceptually simple, it has two drawbacks: first, the question of optimal bandwidth selection is no longer well-defined, as it is not clear what ratio of bias to standard deviation should be considered negligible; second, since the bandwidth choice necessarily deviates from the optimal (mean-squared-error-minimizing) bandwidth, such a confidence interval is very inefficient. To address these issues, we construct valid confidence intervals that account for the presence of a non-negligible bias and thus make it possible to perform inference with optimal, mean-squared-error-minimizing bandwidths. The key difficulty is finding a tight, yet feasible, bound on the bias of a non-parametric estimator. It is well known that the pointwise bias of an optimal non-parametric estimator cannot be estimated consistently (otherwise one could subtract it and obtain a faster convergence rate, violating Stone's bounds on optimal convergence rates). Nevertheless, we find that, under minimal primitive assumptions, it is possible to consistently estimate an upper bound on the magnitude of the bias, which is sufficient to deliver a valid confidence interval whose length decreases at the optimal rate and which does not contradict Stone's results.
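The general idea of a bias-aware interval can be illustrated with a toy one-dimensional kernel density estimate: instead of undersmoothing, the usual Wald interval is widened by an estimated bound on the smoothing bias. This is only a minimal sketch under strong assumptions (Gaussian kernel, pilot second-derivative estimate at bandwidth 2h); the function name and pilot choice are illustrative, not the paper's construction.

```python
import numpy as np

def kde_ci_with_bias_bound(x, x0, h, z=1.96):
    """Pointwise confidence interval for a Gaussian-kernel density estimate
    that widens the usual Wald interval by an estimated upper bound on the
    smoothing bias (a simplified sketch of the bias-bound idea; the paper's
    actual bound and assumptions differ)."""
    n = len(x)
    u = (x - x0) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel values
    f_hat = k.mean() / h                             # kernel density estimate
    sd = np.sqrt(np.var(k / h) / n)                  # plug-in standard error
    # Leading bias term: (h^2 / 2) * mu2(K) * f''(x0), with mu2(Gaussian) = 1.
    # Estimate f'' with a pilot bandwidth g > h and bound the bias by its size.
    g = 2.0 * h
    v = (x - x0) / g
    kg = np.exp(-0.5 * v ** 2) / np.sqrt(2 * np.pi)
    f2_hat = ((v ** 2 - 1.0) * kg).mean() / g ** 3   # KDE second-derivative estimate
    bias_bound = 0.5 * h ** 2 * abs(f2_hat)
    half = z * sd + bias_bound                       # bias-aware half-width
    return f_hat - half, f_hat + half
```

Note that the interval's half-width is standard error plus bias bound, so its length still shrinks at the same rate as the estimator's risk at an MSE-optimal bandwidth.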
Award ID(s):
1659334
NSF-PAR ID:
10188182
Author(s) / Creator(s):
Date Published:
Journal Name:
The Review of Economic Studies
ISSN:
0034-6527
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The weighted nearest neighbors (WNN) estimator has been widely used as a flexible and easy-to-implement nonparametric tool for mean regression estimation. The bagging technique is an elegant way to form WNN estimators with weights automatically assigned to the nearest neighbors (Steele, 2009; Biau et al., 2010); we call the resulting estimator the distributional nearest neighbors (DNN) estimator for easy reference. Yet, there is a lack of distributional results for this estimator, limiting its application to statistical inference. Moreover, when the mean regression function has higher-order smoothness, the DNN does not achieve the optimal nonparametric convergence rate, mainly because of the bias issue. In this work, we provide an in-depth technical analysis of the DNN, based on which we suggest a bias-reduction approach for the DNN estimator: linearly combining two DNN estimators with different subsampling scales, resulting in the novel two-scale DNN (TDNN) estimator. The two-scale DNN estimator has an equivalent representation as a WNN estimator with weights admitting explicit forms, some of which are negative. We prove that, thanks to the use of negative weights, the two-scale DNN estimator enjoys the optimal nonparametric rate of convergence in estimating the regression function under a fourth-order smoothness condition. We further go beyond estimation and establish that the DNN and two-scale DNN are both asymptotically normal as the subsampling scales and sample size diverge to infinity. For practical implementation, we also provide variance estimators and a distribution estimator using the jackknife and bootstrap techniques for the two-scale DNN. These estimators can be exploited for constructing valid confidence intervals for nonparametric inference on the regression function. The theoretical results and appealing finite-sample performance of the suggested two-scale DNN method are illustrated with several simulation examples and a real data application.
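The explicit WNN weights and the two-scale combination can be sketched as follows. The subsampling scales and function names are illustrative; the combination weights simply solve the two-equation bias-cancellation system implied by the O(s^{-2/d}) leading bias described in the abstract.

```python
import numpy as np
from math import comb

def dnn_estimate(X, y, x0, s):
    """Distributional nearest neighbors (bagged 1-NN over size-s subsamples):
    the i-th nearest neighbor of x0 receives weight C(n-i, s-1) / C(n, s),
    its probability of being the nearest neighbor within a random size-s
    subsample.  These weights sum to one exactly."""
    n = len(y)
    order = np.argsort(np.linalg.norm(X - x0, axis=1))
    w = np.array([comb(n - i, s - 1) / comb(n, s) for i in range(1, n + 1)])
    return float(w @ y[order])

def tdnn_estimate(X, y, x0, s1, s2):
    """Two-scale DNN sketch: combine two DNN estimators so that the leading
    O(s^{-2/d}) bias terms cancel; one combination weight is necessarily
    negative when s1 != s2."""
    d = X.shape[1]
    a, b = s1 ** (-2.0 / d), s2 ** (-2.0 / d)
    w1 = b / (b - a)   # solves w1 + w2 = 1 and w1*a + w2*b = 0
    return w1 * dnn_estimate(X, y, x0, s1) + (1.0 - w1) * dnn_estimate(X, y, x0, s2)
```

Because the weights sum to one, both estimators reproduce a constant response exactly, which is a quick sanity check on any implementation.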
  2. Summary

    We derive non-parametric confidence intervals for the eigenvalues of the Hessian at modes of a density estimate. This provides information about the strength and shape of modes and can also be used as a significance test. We use a data-splitting approach in which potential modes are identified by using the first half of the data and inference is done with the second half of the data. To obtain valid confidence sets for the eigenvalues, we use a bootstrap based on an elementary symmetric polynomial transformation. This leads to valid bootstrap confidence sets regardless of any multiplicities in the eigenvalues. We also suggest a new method for bandwidth selection, namely choosing the bandwidth to maximize the number of significant modes. We show by example that this method works well. Even when the true distribution is singular, and hence does not have a density (in which case cross-validation chooses a zero bandwidth), our method chooses a reasonable bandwidth.
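The elementary symmetric polynomial transformation on which the bootstrap operates can be sketched directly; the helper names are illustrative. Bootstrap confidence sets would be built on the transformed scale (where multiplicities cause no irregularity) and mapped back through the inverse.

```python
import numpy as np

def esp_transform(eigvals):
    """Map eigenvalues (l1, ..., ld) to the elementary symmetric polynomials
    (e1, ..., ed).  The map is smooth and invertible up to ordering, so it
    stays well-behaved even when eigenvalues have multiplicities."""
    # np.poly returns characteristic coefficients [1, -e1, +e2, -e3, ...]
    coeffs = np.poly(eigvals)
    signs = (-1.0) ** np.arange(1, len(eigvals) + 1)
    return signs * coeffs[1:]

def esp_inverse(e):
    """Recover the unordered eigenvalues as the roots of
    t^d - e1 t^{d-1} + e2 t^{d-2} - ..."""
    signs = (-1.0) ** np.arange(1, len(e) + 1)
    return np.sort(np.roots(np.concatenate(([1.0], signs * e))).real)
```

For example, the eigenvalues (2, 2, 5) map to (e1, e2, e3) = (9, 24, 20), and root-finding on the associated monic polynomial recovers the repeated pair without any special handling.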

     
  3. Summary

    We focus on selecting optimal bandwidths for non-parametric estimation of the two-point correlation function of a point pattern. We obtain these optimal bandwidths by using a bootstrap approach to select a bandwidth that minimizes the integrated squared error. The variance term is estimated by using a non-parametric spatial bootstrap, whereas the bias term is estimated with a plug-in approach using a pilot estimator of the two-point correlation function based on a parametric model. The choice of parametric model for the pilot estimator is very flexible. Depending on applications, parametric statistical point models, physical models or functional models can be used. We also explore the use of the procedure for selecting adaptive optimal bandwidths. We investigate the performance of the bandwidth selection procedure by using a simulation study. In our data example, we apply our method to a Sloan Digital Sky Survey galaxy cluster catalogue by using a pilot estimator based on the power law functional model in cosmology. The resulting non-parametric two-point correlation function estimate is then used to estimate a cosmological mass bias parameter that describes the relationship between the galaxy mass distribution and the underlying matter distribution.
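The variance-plus-squared-bias decomposition of the bandwidth selector can be sketched in a simpler setting, a one-dimensional kernel density, where a fixed-bandwidth KDE stands in for the paper's parametric pilot model of the two-point correlation function. All names and the candidate grid are illustrative.

```python
import numpy as np

def bootstrap_bandwidth(x, grid, candidates, pilot_h, B=100, seed=0):
    """Pick the bandwidth minimising an estimated integrated squared error:
    the variance term comes from a nonparametric bootstrap, the squared-bias
    term from comparison with a plug-in pilot estimate (sketch only)."""
    rng = np.random.default_rng(seed)
    dx = grid[1] - grid[0]

    def kde(data, h):
        u = (grid[:, None] - data[None, :]) / h
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

    pilot = kde(x, pilot_h)                     # pilot estimate for the bias term
    best_h, best_ise = None, np.inf
    for h in candidates:
        est = kde(x, h)
        boot = np.stack([kde(rng.choice(x, size=len(x)), h) for _ in range(B)])
        # integrated (bootstrap variance + squared plug-in bias)
        ise = (boot.var(axis=0) + (est - pilot) ** 2).sum() * dx
        if ise < best_ise:
            best_h, best_ise = h, ise
    return best_h
```

The selector penalises very small bandwidths through the bootstrap variance and very large ones through the squared deviation from the pilot, so it settles on an intermediate value.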

     
  4. Abstract

    In many scientific experiments, multiarmed bandits are used as an adaptive data collection method. However, this adaptive process can induce dependence that renders many commonly used statistical inference methods invalid. An example is the sample mean, which is a natural estimator of the mean parameter but can be biased. This can cause test statistics based on this estimator to have an inflated type I error rate, and the resulting confidence intervals may have significantly lower coverage probabilities than their nominal values. To address this issue, we propose an alternative approach called the randomized multiarmed bandit (rMAB), which combines a randomization step with a chosen MAB algorithm; by selecting the randomization probability appropriately, optimal regret can be achieved asymptotically. Numerical evidence shows that the bias of the sample mean based on the rMAB is much smaller than that of other methods. The test statistic and confidence interval produced by this method also perform much better than those of its competitors.
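The randomization step can be sketched as follows. The decaying randomization schedule and the UCB rule used here are illustrative stand-ins for the paper's choice of randomization probability and base MAB algorithm.

```python
import numpy as np

def rmab(means, T, eps_fn, seed=0):
    """Randomised multi-armed bandit sketch: at round t, with probability
    eps_fn(t) pull a uniformly random arm (the randomisation step), otherwise
    follow a UCB rule.  Forced randomisation tempers the adaptive sampling
    bias of the plain sample mean."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    for t in range(1, T + 1):
        if t <= K:                        # initialise: pull each arm once
            arm = t - 1
        elif rng.random() < eps_fn(t):    # forced randomisation step
            arm = int(rng.integers(K))
        else:                             # exploit via a UCB index
            ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = rng.normal(means[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
    return counts, sums / counts
```

A schedule such as eps_fn(t) = min(1, c/t) keeps the expected number of forced pulls logarithmic, which is how randomization can coexist with asymptotically optimal regret.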

     
  5. We consider the problem of constructing asymptotically valid confidence intervals for the change point in a high-dimensional covariance shift setting. A novel estimator for the change point parameter is developed, and its asymptotic distribution under high-dimensional scaling obtained. We establish that the proposed estimator exhibits a sharp O_p(ψ^{-2}) rate of convergence, wherein ψ represents the jump size between model parameters before and after the change point. Further, the form of the asymptotic distributions under both a vanishing and a non-vanishing regime of the jump size are characterized. In the former case, it corresponds to the argmax of an asymmetric Brownian motion, while in the latter case to the argmax of an asymmetric random walk. We then obtain the relationship between these distributions, which allows construction of regime (vanishing vs non-vanishing) adaptive confidence intervals. Easy-to-implement algorithms for the proposed methodology are developed and their performance illustrated on synthetic and real data sets.
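A simple least-squares version of the covariance-shift change-point problem can be sketched as follows; this generic estimator is only an illustration of the estimation problem, not the paper's refined estimator or its high-dimensional theory.

```python
import numpy as np

def cov_change_point(X):
    """Least-squares change-point sketch for a covariance shift: summarise
    each candidate segment by the mean of the outer products x x^T and pick
    the split minimising the within-segment sum of squared Frobenius
    deviations of the outer products from their segment mean."""
    n, p = X.shape
    best_k, best_loss = None, np.inf
    for k in range(p + 1, n - p):              # keep both segments non-trivial
        S1 = X[:k].T @ X[:k] / k               # segment second-moment summaries
        S2 = X[k:].T @ X[k:] / (n - k)
        loss = sum(np.sum((np.outer(z, z) - S1) ** 2) for z in X[:k]) + \
               sum(np.sum((np.outer(z, z) - S2) ** 2) for z in X[k:])
        if loss < best_loss:
            best_k, best_loss = k, loss
    return best_k
```

The larger the Frobenius norm of the covariance jump (the role played by ψ above), the more sharply the loss is minimised at the true split, consistent with the O_p(ψ^{-2}) localisation rate.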