This content will become publicly available on April 1, 2026

Title: Forward selection and post-selection inference in factorial designs
Award ID(s):
1945136 2239047
PAR ID:
10625614
Author(s) / Creator(s):
; ;
Publisher / Repository:
IMS
Date Published:
Journal Name:
The Annals of Statistics
Volume:
53
Issue:
2
ISSN:
0090-5364
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets. However, they can be prohibitively expensive to apply in deep learning because they depend on feature representations that need to be learned. In this work, we show that we can greatly improve the computational efficiency by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signals for data selection. We evaluate this "selection via proxy" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error (often within 0.1%). For core-set selection on CIFAR10, proxies that are over 10× faster to train than their larger, more accurate targets can remove up to 50% of the data without harming the final accuracy of the target, leading to a 1.6× end-to-end training time improvement.
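The selection-via-proxy idea above can be sketched in a few lines: train a cheap proxy model on the labeled set, score the unlabeled pool by the proxy's uncertainty, and hand the most uncertain points to the expensive target for labeling or training. This is a minimal illustrative sketch, not the authors' implementation: the linear proxy, the margin-based uncertainty score, and the synthetic data are all stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: two Gaussian blobs in 5 dimensions.
X_labeled = np.vstack([rng.normal(-2, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y_labeled = np.array([0] * 20 + [1] * 20)
X_pool = np.vstack([rng.normal(-2, 1, (100, 5)), rng.normal(2, 1, (100, 5))])


def select_via_proxy(proxy, X_lab, y_lab, X_pool, k):
    """Train the cheap proxy, then return indices of the k pool points
    the proxy is least confident about (smallest class-probability margin)."""
    proxy.fit(X_lab, y_lab)
    probs = proxy.predict_proba(X_pool)
    margin = np.abs(probs[:, 0] - probs[:, 1])  # small margin = uncertain
    return np.argsort(margin)[:k]


# A small, fast model plays the role of the proxy; in the paper this
# would be a shallower, shorter-trained version of the deep target.
proxy = LogisticRegression()
picked = select_via_proxy(proxy, X_labeled, y_labeled, X_pool, k=10)
print(picked.shape)  # (10,)
```

The selected indices would then be labeled (active learning) or kept as the core-set on which the larger target model is trained; the saving comes from never running the expensive target during the selection loop.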
  2. Density-dependent selection, which promotes contrasting patterns of trait means at different population densities, has a long history in population genetics and ecology. The unifying principle from theory is that density-dependent selection operates on phenotypic traits whose values counter the effects of whatever ecological agent is limiting population growth, be it resource competition, predators, or pathogens. However, the complexity inherent in density dependence means that the same selective process can generate multiple outcomes, depending upon the details of how population density affects vital rates and the age or size structure of a population. Failure to appreciate the potential for multiple outcomes confounded many early studies of the process. Nonetheless, careful empirical work in laboratory studies, long-term field studies, and studies of sexual selection demonstrates the wide reach of density-dependent selection. The inconsistent outcomes observed in these studies call for renewed research into how the details of density dependence channel adaptive responses. 