Ensemble methods that average over a collection of independent predictors, each limited to a subsample of both the examples and the features of the training data, command a significant presence in machine learning; the ever-popular random forest is one example. Yet the nature of the subsampling effect, particularly of the features, is not well understood. We study the case of an ensemble of linear predictors, where each individual predictor is fit using ordinary least squares on a random submatrix of the data matrix. We show that, under standard Gaussianity assumptions, when the number of features selected for each predictor is optimally tuned, the asymptotic risk of a large ensemble equals the asymptotic ridge regression risk, which is known to be optimal among linear predictors in this setting. In addition to eliciting this implicit regularization that results from subsampling, we also connect this ensemble to the dropout technique used in training deep neural networks, another strategy that has been shown to have a ridge-like regularizing effect.
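As a rough illustration of the procedure the abstract describes (not the paper's code), the following NumPy sketch fits many OLS predictors, each on a random submatrix of rows and columns, averages their predictions, and compares the resulting test risk to a ridge baseline. All problem sizes, the ensemble size B, the subsample sizes, and the ridge grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Gaussian data (illustrative sizes): n examples, p features.
n, p = 500, 100
beta = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=0.5, size=n)
X_test = rng.normal(size=(1000, p))
y_test = X_test @ beta  # noiseless targets, so test MSE estimates risk

def ols_ensemble_predict(X, y, X_new, k, n_sub, B=200, rng=rng):
    """Average of B OLS predictors, each fit by least squares (via the
    pseudoinverse) on a random submatrix of n_sub rows and k columns."""
    preds = np.zeros(X_new.shape[0])
    for _ in range(B):
        rows = rng.choice(X.shape[0], size=n_sub, replace=False)
        cols = rng.choice(X.shape[1], size=k, replace=False)
        coef = np.linalg.pinv(X[np.ix_(rows, cols)]) @ y[rows]
        preds += X_new[:, cols] @ coef
    return preds / B

# Sweep the number of features per predictor; k plays the role of the
# tuning parameter whose optimum should match ridge, per the abstract.
for k in (20, 40, 60, 80):
    yhat = ols_ensemble_predict(X, y, X_test, k=k, n_sub=400)
    print(f"k={k:3d}  ensemble test MSE: {np.mean((yhat - y_test)**2):.4f}")

# Ridge baseline (closed form), lambda chosen by a small grid search.
best = np.inf
for lam in 10.0 ** np.arange(-3, 3):
    coef = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    best = min(best, np.mean((X_test @ coef - y_test) ** 2))
print(f"ridge (best lambda) test MSE: {best:.4f}")
```

In a run of this sketch, the ensemble risk traced out over k should dip near the ridge baseline at some intermediate k, consistent with the abstract's claim that an optimally tuned feature-subsampled OLS ensemble matches the asymptotic ridge risk.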
Structured Ordinary Least Squares: A Sufficient Dimension Reduction approach for regressions with partitioned predictors and heterogeneous units
- Award ID(s): 1713078
- PAR ID: 10066500
- Date Published:
- Journal Name: Biometrics
- Volume: 73
- Issue: 2
- ISSN: 0006-341X
- Page Range / eLocation ID: 529–539
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation