Title: On the maximal deviation of kernel regression estimators with NMAR response variables
This article studies kernel regression estimation in the presence of nonignorable incomplete data, with particular focus on the limiting distribution of the maximal deviation of the proposed estimators. From an applied point of view, such a limiting distribution enables one to construct asymptotically correct uniform confidence bands, or to perform hypothesis tests, for a regression curve when the available data suffer from missing (not necessarily at random) response values. Such asymptotic results have also long been of theoretical interest in mathematical statistics. We also present numerical results that confirm and complement the theoretical developments of this paper.
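A rough illustration of how such a limiting distribution is used in practice: an extreme-value (Gumbel-type) approximation to the sup-deviation of a kernel regression estimator gives a critical value, which scales the estimator's pointwise standard deviation into a band covering the whole curve simultaneously. The sketch below is a minimal complete-case version with a Gaussian kernel; it does not implement the paper's NMAR correction, the extreme-value constants are the classical stationary-Gaussian-process ones rather than the paper's exact normalization, and the conditional variance estimate sigma2 and design density estimate fhat are taken as given.

```python
import numpy as np

def nw_complete_case(xg, X, Y, R, h):
    # Gaussian-kernel Nadaraya-Watson fit on a grid xg, using only cases
    # with observed responses (R == 1). The paper's estimator additionally
    # corrects for NMAR missingness via a response-probability model,
    # which is not reproduced here.
    K = np.exp(-0.5 * ((xg[:, None] - X[None, :]) / h) ** 2)
    W = K * R[None, :]
    return (W @ Y) / W.sum(axis=1)

def uniform_band_halfwidth(xg, R, h, sigma2, fhat, alpha=0.05):
    # Extreme-value approximation for the sup-deviation on [a, b]: by the
    # Rice formula for a stationary Gaussian limit process, the critical
    # value u solves (T * sqrt(lam2) / pi) * exp(-u^2 / 2) = alpha, with
    # T = (b - a) / h and lam2 = R(K') / R(K) (= 1/2 for a Gaussian kernel).
    a, b = xg.min(), xg.max()
    T = (b - a) / h
    lam2 = 0.5                          # Gaussian kernel: R(K') / R(K)
    RK = 1.0 / (2.0 * np.sqrt(np.pi))   # Gaussian kernel roughness ||K||^2
    u = np.sqrt(2.0 * np.log(T * np.sqrt(lam2) / (np.pi * alpha)))
    n_obs = R.sum()                     # effective complete-case sample size
    return u * np.sqrt(sigma2 * RK / (n_obs * h * fhat))
```

The band is then nw_complete_case(xg, X, Y, R, h) plus or minus the returned half-width, with simultaneous coverage approaching 1 - alpha under the stated approximations.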
Award ID(s):
1916161
PAR ID:
10342591
Author(s) / Creator(s):
Date Published:
Journal Name:
Statistical Papers
ISSN:
0932-5026
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The weighted nearest neighbors (WNN) estimator has been widely used as a flexible and easy-to-implement nonparametric tool for mean regression estimation. The bagging technique is an elegant way to form WNN estimators with weights automatically assigned to the nearest neighbors (Steele, 2009; Biau et al., 2010); we call the resulting estimator the distributional nearest neighbors (DNN) estimator for easy reference. Yet there is a lack of distributional results for this estimator, limiting its application to statistical inference. Moreover, when the mean regression function has higher-order smoothness, DNN does not achieve the optimal nonparametric convergence rate, mainly because of the bias issue. In this work, we provide an in-depth technical analysis of the DNN, based on which we suggest a bias-reduction approach for the DNN estimator: linearly combining two DNN estimators with different subsampling scales, resulting in the novel two-scale DNN (TDNN) estimator (see the sketch after this list). The two-scale DNN estimator has an equivalent representation as a WNN estimator with weights admitting explicit forms, some of them negative. We prove that, thanks to the use of negative weights, the two-scale DNN estimator enjoys the optimal nonparametric rate of convergence in estimating the regression function under a fourth-order smoothness condition. We further go beyond estimation and establish that the DNN and two-scale DNN are both asymptotically normal as the subsampling scales and sample size diverge to infinity. For practical implementation, we also provide variance estimators and a distribution estimator for the two-scale DNN using the jackknife and bootstrap techniques. These estimators can be exploited to construct valid confidence intervals for nonparametric inference on the regression function. The theoretical results and appealing finite-sample performance of the suggested two-scale DNN method are illustrated with several simulation examples and a real data application.
  2. Aggregating person-level data across multiple clinical study sites is often constrained by privacy regulations, necessitating the development of decentralized modeling approaches in biomedical research. To address this requirement, a federated nonlinear regression algorithm based on the Choquet integral has been introduced for outcome prediction (the integral itself is sketched after this list). This approach avoids reliance on prior statistical assumptions about the data distribution and captures feature interactions, reflecting the non-additive nature of biomedical data. This work represents the first theoretical application of Choquet integral regression to multisite longitudinal trial data within a federated learning framework. The Multiple Imputation Choquet Integral Regression with LASSO (MIChoquet-LASSO) algorithm is specifically designed to reduce overfitting and enable variable selection in federated learning settings. Its performance has been evaluated using synthetic datasets, publicly available biomedical datasets, and proprietary longitudinal randomized controlled trial data. Comparative evaluations were conducted against benchmark methods, including ordinary least squares (OLS) regression and Choquet-OLS regression, under various scenarios such as model misspecification and both linear and nonlinear data structures in non-federated and federated contexts. Mean squared error was used as the primary performance metric. Results indicate that MIChoquet-LASSO outperforms the compared models in handling nonlinear longitudinal data with missing values, particularly in scenarios prone to overfitting. In federated settings, Choquet-OLS underperforms, whereas the federated variant of the model, FEDMIChoquet-LASSO, demonstrates consistently better performance. These findings suggest that FEDMIChoquet-LASSO offers a reliable solution for outcome prediction in multisite longitudinal trials, addressing challenges such as missing values, nonlinear relationships, and privacy constraints while maintaining strong performance within the federated learning framework.
  3. Work in machine learning and statistics commonly focuses on building models that capture the vast majority of the data, possibly ignoring a segment of the population as outliers. However, a good, simple model may not exist for the full distribution, so we instead seek a small subset on which such a model does exist. We give a computationally efficient algorithm, with theoretical analysis, for the conditional linear regression task: the joint task of identifying a significant portion of the data distribution, described by a k-DNF, along with a linear predictor with small loss on that portion (the task setup is sketched after this list). In contrast to work in robust statistics on small subsets, our loss bounds do not depend on the density of the portion we fit, and compared to previous work on conditional linear regression, our algorithm's running time scales polynomially with the sparsity of the linear predictor. We also demonstrate empirically that our algorithm can leverage this advantage to obtain a k-DNF with a better linear predictor in practice.
  4. The assessment of regression models with discrete outcomes is challenging and raises many fundamental issues. With discrete outcomes, standard regression model assessment tools such as Pearson and deviance residuals do not follow their conventional reference distribution (normal) under the true model, calling into question the legitimacy of model assessment based on these tools. To fill this gap, we construct a new type of residual for regression models with general discrete outcomes, including ordinal and count outcomes. The proposed residuals are based on two layers of probability integral transformation (sketched after this list). When at least one continuous covariate is available, the proposed residuals closely follow a uniform distribution (or a normal distribution after transformation) under the correctly specified model. One can straightforwardly construct visualizations such as QQ plots to check the overall fit of a model, and the shape of QQ plots can further help identify possible causes of misspecification such as overdispersion. We provide theoretical justification for the proposed residuals by establishing their asymptotic properties. Moreover, to assess the mean structure and identify potential covariates, we develop an ordered curve as a supplementary tool, based on comparing partial sums of outcomes with partial sums of fitted means. Through simulation, we demonstrate empirically that the proposed tools outperform commonly used residuals for various model assessment tasks. We also illustrate the workflow of model assessment using the proposed tools in data analysis. Supplementary materials for this article are available online.
  5. Domain adaptation addresses the challenge where the distribution of target inference data differs from that of the source training data. Recently, data privacy has become a significant constraint, limiting access to the source domain. To mitigate this issue, Source-Free Domain Adaptation (SFDA) methods bypass source domain data by generating source-like data or by pseudo-labeling the unlabeled target domain (a pseudo-labeling baseline is sketched after this list). However, these approaches often lack theoretical grounding. In this work, we provide a theoretical analysis of the SFDA problem, focusing on the general empirical risk of the unlabeled target domain. Our analysis offers a comprehensive understanding of how representativeness, generalization, and variety contribute to controlling the upper bound of the target domain empirical risk in SFDA settings. We further explore how to balance this trade-off from three perspectives: sample selection, semantic domain alignment, and a progressive learning framework. These insights inform the design of novel algorithms. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on three benchmark datasets (Office-Home, DomainNet, and VisDA-C), yielding relative improvements of 3.2%, 9.1%, and 7.5%, respectively, over the representative SFDA method SHOT.
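Sketch for item 1 (two-scale DNN). A minimal version of the construction described above, under two assumptions drawn from the abstract: the DNN estimator with subsampling scale s is the WNN estimator whose weight on the i-th nearest neighbor equals the probability that this neighbor is the single nearest neighbor within a uniformly drawn subsample of size s, and the leading bias term is of order s^(-2/d), so a two-point linear combination with one negative weight cancels it. Function names and the Euclidean metric are illustrative choices.

```python
import numpy as np
from scipy.special import comb

def dnn(x, X, Y, s):
    # Distributional nearest neighbors (bagged 1-NN) at a query point x.
    # Weight on the i-th nearest neighbor: the probability that it is the
    # nearest neighbor within a uniform subsample of size s drawn without
    # replacement, w_i = C(n - i, s - 1) / C(n, s); these weights sum to
    # one and vanish for i > n - s + 1.
    n = len(Y)
    order = np.argsort(np.linalg.norm(X - x, axis=1))
    i = np.arange(1, n + 1)
    w = comb(n - i, s - 1) / comb(n, s)
    return float(np.dot(w, Y[order]))

def tdnn(x, X, Y, s1, s2, d):
    # Two-scale DNN with s1 < s2: the scale weights solve w1 + w2 = 1 and
    # w1 * s1^(-2/d) + w2 * s2^(-2/d) = 0, cancelling the assumed leading
    # bias term; w1 comes out negative, inducing the negative WNN weights
    # the abstract refers to.
    w1 = 1.0 / (1.0 - (s2 / s1) ** (2.0 / d))
    return w1 * dnn(x, X, Y, s1) + (1.0 - w1) * dnn(x, X, Y, s2)
```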
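Sketch for item 2 (Choquet integral). The regression above is built on the discrete Choquet integral, which aggregates features with respect to a capacity (a monotone set function) and thereby captures non-additive feature interactions. The sketch computes the integral for a given capacity; fitting the capacity with LASSO penalties, multiple imputation, and federation, as the paper does, is not shown.

```python
import numpy as np

def choquet_integral(x, mu):
    # Discrete Choquet integral of a feature vector x with respect to a
    # capacity mu: a dict from frozensets of feature indices to [0, 1],
    # monotone, with mu(full set) = 1. Sorting values in ascending order,
    # C_mu(x) = sum_i (x_(i) - x_(i-1)) * mu({j : x_j >= x_(i)}).
    order = np.argsort(x)
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:].tolist())  # features at or above x_(i)
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total
```

For example, with mu({0}) = 0.3, mu({1}) = 0.4, and mu({0, 1}) = 1.0 (a superadditive interaction), the input x = (0.2, 0.8) aggregates to 0.2 * 1.0 + 0.6 * 0.4 = 0.44.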
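Sketch for item 3 (conditional linear regression). The task pairs a k-DNF condition over Boolean attributes with a linear predictor on the sub-population the condition selects. The sketch below only evaluates a given k-DNF and fits least squares on the selected points, reporting loss and coverage, the two quantities the task trades off; the paper's efficient search over k-DNFs is not reproduced.

```python
import numpy as np

def dnf_mask(B, dnf):
    # B: (n, m) Boolean attribute matrix. dnf: list of terms, each term a
    # list of (attribute index, required value) literals; a point satisfies
    # the k-DNF if it satisfies every literal of at least one term.
    mask = np.zeros(len(B), dtype=bool)
    for term in dnf:
        t = np.ones(len(B), dtype=bool)
        for j, v in term:
            t &= (B[:, j] == v)
        mask |= t
    return mask

def conditional_ols(X, y, B, dnf):
    # Least-squares fit restricted to the sub-population described by the
    # k-DNF; returns coefficients, mean squared loss on that portion, and
    # coverage (the fraction of the data selected).
    m = dnf_mask(B, dnf)
    beta, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
    return beta, float(np.mean((X[m] @ beta - y[m]) ** 2)), float(m.mean())
```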
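Sketch for item 4 (two-layer PIT residuals). One way to read the two-layer construction for, say, Poisson regression: the first layer computes u_i = F_i(y_i) from the fitted conditional CDFs, and the second layer transforms u_i by an estimate of the distribution of F(Y | X) under the model, which for discrete outcomes is not uniform. This is a plausible rendering of the idea, not necessarily the paper's exact estimator, and the O(n^2) loop is purely illustrative.

```python
import numpy as np
from scipy.stats import poisson

def two_layer_pit_residuals(y, mu, ymax=200):
    # y: observed counts; mu: fitted Poisson means. Layer 1: u_i = F_i(y_i).
    # Layer 2: r_i = G_hat(u_i), where G_hat(s) = (1/n) sum_j F_j(q_j(s))
    # and q_j(s) is the largest support point q with F_j(q) <= s, which
    # estimates P(F(Y | X) <= s) under the model. Under a correct model,
    # r should be approximately uniform, so a QQ plot of r against
    # Uniform(0, 1) checks the overall fit.
    n = len(y)
    u = poisson.cdf(y, mu)                                   # first layer
    grid = np.arange(ymax + 1)
    F = poisson.cdf(grid[None, :], np.asarray(mu)[:, None])  # (n, ymax + 1)
    r = np.empty(n)
    for i in range(n):
        # per row j, the largest CDF value not exceeding u_i is F_j(q_j(u_i))
        r[i] = np.where(F <= u[i], F, 0.0).max(axis=1).mean()
    return r
```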
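Sketch for item 5 (SFDA pseudo-labeling). The comparison method named above, SHOT, pseudo-labels the unlabeled target domain by alternating between class centroids built from the current predictions and nearest-centroid reassignment. A NumPy rendering of that single step follows; the paper's own sample-selection, semantic-alignment, and progressive-learning components are not reproduced.

```python
import numpy as np

def centroid_pseudo_labels(feats, probs, n_iter=2):
    # feats: (n, d) target-domain features; probs: (n, K) current softmax
    # predictions. Build class centroids (softly weighted on the first
    # pass, hard afterwards), assign each point to its nearest centroid
    # by cosine similarity, and iterate.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    weights = probs
    for _ in range(n_iter):
        c = weights.T @ f / (weights.sum(axis=0)[:, None] + 1e-8)
        c = c / (np.linalg.norm(c, axis=1, keepdims=True) + 1e-8)
        labels = (f @ c.T).argmax(axis=1)
        weights = np.eye(probs.shape[1])[labels]   # hard one-hot assignments
    return labels
```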