Many methods for estimation or control of the false discovery rate (FDR) can be improved by incorporating information about π0, the proportion of all tested null hypotheses that are true. Estimates of π0 are often based on the number of p-values that exceed a threshold λ. We first give a finite sample proof for conservative point estimation of the FDR when the λ-parameter is fixed. Then we establish a condition under which a dynamic adaptive procedure, whose λ-parameter is determined by data, will lead to conservative π0- and FDR estimators. We also present asymptotic results on simultaneous conservative FDR estimation and control for a class of dynamic adaptive procedures. Simulation results show that a novel dynamic adaptive procedure achieves more power through smaller estimation errors for π0 under independence and mild dependence conditions. We conclude by discussing the connection between estimation and control of the FDR and show that several recently developed FDR control procedures can be cast in a unifying framework where the strength of the procedures can be easily evaluated.
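As a concrete illustration of the λ-based estimation idea described above, the sketch below implements a Storey-type point estimate of π0 for a fixed λ and plugs it into the usual point estimate of the FDR at a rejection threshold t. The function names and the simulated example are illustrative only, not the paper's notation or its dynamic adaptive procedure.

```python
import numpy as np

def pi0_estimate(pvals, lam=0.5):
    """Storey-type estimator: under the null, p-values are uniform,
    so roughly pi0 * (1 - lam) of them should exceed lam."""
    pvals = np.asarray(pvals, dtype=float)
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

def fdr_estimate(pvals, t, lam=0.5):
    """Plug-in FDR point estimate at threshold t:
    pi0_hat * m * t / max(#{p_i <= t}, 1)."""
    pvals = np.asarray(pvals, dtype=float)
    m = pvals.size
    n_rejected = max(int(np.sum(pvals <= t)), 1)
    return min(1.0, pi0_estimate(pvals, lam) * m * t / n_rejected)

# Illustrative data: 900 true nulls (uniform) plus 100 signals (small p-values)
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.5, 10.0, size=100)])
print(pi0_estimate(p), fdr_estimate(p, t=0.05))
```

A dynamic adaptive procedure in the paper's sense would instead choose λ from the data; the conditions established in the paper govern when that data-driven choice still yields conservative π0 and FDR estimators.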
- Publication Date:
- NSF-PAR ID:
- 10401138
- Journal Name:
- Journal of the Royal Statistical Society Series B: Statistical Methodology
- Volume:
- 74
- Issue:
- 1
- Page Range or eLocation-ID:
- p. 163-182
- ISSN:
- 1369-7412
- Publisher:
- Oxford University Press
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract Multiple testing (MT) with false discovery rate (FDR) control has been widely conducted in the “discrete paradigm” where p-values have discrete and heterogeneous null distributions. However, in this scenario existing FDR procedures often lose some power and may yield unreliable inference, and there does not seem to be an FDR procedure that partitions hypotheses into groups, employs data-adaptive weights and is nonasymptotically conservative. We propose a weighted p-value-based FDR procedure, the “weighted FDR (wFDR) procedure” for short, for MT in the discrete paradigm that efficiently adapts to both the heterogeneity and discreteness of p-value distributions. We theoretically justify the nonasymptotic conservativeness of the wFDR procedure under independence, and show via simulation studies that, for MT based on p-values of the binomial test or Fisher's exact test, it is more powerful than six other procedures. The wFDR procedure is applied to two examples based on discrete data, a drug safety study and a differential methylation study, where it makes more discoveries than two existing methods.
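The wFDR procedure's grouping scheme and data-adaptive weights are tailored to discrete null distributions and are not spelled out in this summary; for orientation, a generic weighted Benjamini–Hochberg step-up rule (with assumed weights averaging at most 1, not the authors' exact procedure) can be sketched as follows.

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Generic weighted BH: apply the step-up rule to the weighted
    p-values p_i / w_i; weights should average at most 1 so the
    overall testing budget is not inflated."""
    pvals = np.asarray(pvals, dtype=float)
    weights = np.asarray(weights, dtype=float)
    m = pvals.size
    q = pvals / weights                        # weighted p-values
    order = np.argsort(q)
    below = q[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                 # reject the k smallest
    return rejected
```

-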
Summary In multiple-testing problems, where a large number of hypotheses are tested simultaneously, false discovery rate (FDR) control can be achieved with the well-known Benjamini–Hochberg procedure, which adapts to the amount of signal in the data, under certain distributional assumptions. Many modifications of this procedure have been proposed to improve power in scenarios where the hypotheses are organized into groups or into a hierarchy, as well as other structured settings. Here we introduce the ‘structure-adaptive Benjamini–Hochberg algorithm’ (SABHA) as a generalization of these adaptive testing methods. The SABHA method incorporates prior information about any predetermined type of structure in the pattern of locations of the signals and nulls within the list of hypotheses, to reweight the p-values in a data-adaptive way. This raises the power by making more discoveries in regions where signals appear to be more common. Our main theoretical result proves that the SABHA method controls the FDR at a level that is at most slightly higher than the target FDR level, as long as the adaptive weights are constrained sufficiently so as not to overfit too much to the data; interestingly, the excess FDR can be related to the Rademacher complexity or Gaussian width of the class …
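SABHA's reweighting is driven by a constrained estimate of the local null proportion; the sketch below uses a simple group-wise, Storey-style estimate clipped away from zero, then applies the step-up rule to the reweighted p-values. The grouping, λ, and clipping level eps here are illustrative assumptions standing in for the constrained class of weight functions analyzed in the paper.

```python
import numpy as np

def local_null_proportion(pvals, groups, lam=0.5, eps=0.1):
    """Group-wise Storey-style estimate of the null proportion q_i,
    clipped to [eps, 1]; groups rich in signal get small q_i."""
    pvals = np.asarray(pvals, dtype=float)
    groups = np.asarray(groups)
    qhat = np.ones_like(pvals)
    for g in np.unique(groups):
        idx = groups == g
        q = np.mean(pvals[idx] > lam) / (1.0 - lam)
        qhat[idx] = np.clip(q, eps, 1.0)
    return qhat

def sabha_style_bh(pvals, groups, alpha=0.05):
    """Step-up rule applied to the reweighted p-values qhat_i * p_i,
    so hypotheses in signal-rich groups face an easier threshold."""
    pvals = np.asarray(pvals, dtype=float)
    q = local_null_proportion(pvals, groups) * pvals
    m = q.size
    order = np.argsort(q)
    below = q[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected
```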
-
Covariate‐adaptive randomization (CAR) procedures have been developed in clinical trials to mitigate the imbalance of treatments among covariates. In recent years, an increasing number of trials have started to use CAR for the advantages in statistical efficiency and enhancing credibility. At the same time, sample size re‐estimation (SSR) has become a common technique in industry to reduce time and cost while maintaining a good probability of success. Despite the widespread popularity of combining CAR designs with SSR, few researchers have investigated this combination theoretically. More importantly, the existing statistical inference must be adjusted to protect the desired type I error rate when a model that omits some covariates is used. In this article, we give a framework for the application of SSR in CAR trials and study the underlying theoretical properties. We give the adjusted test statistic and derive the sample size calculation formula under the CAR setting. We can tackle the difficulties caused by the adaptive features in CAR and prove the asymptotic independence between stages. Numerical studies are conducted under multiple parameter settings and scenarios that are commonly encountered in practice. The results show that all advantages of CAR and SSR can be preserved and further improved in …
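The adjusted test statistic and CAR-specific sample size formula derived in the article are not reproduced in this summary; as generic background on what an interim SSR step recomputes, the classical two-arm recalculation from an interim standard deviation estimate is sketched below (all parameter values are illustrative).

```python
import numpy as np
from scipy.stats import norm

def reestimated_sample_size(sd_hat, delta, alpha=0.05, power=0.8):
    """Per-arm sample size to detect a mean difference delta at
    two-sided level alpha with the given power, using the interim
    estimate sd_hat of the common standard deviation."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    z_beta = norm.ppf(power)
    return int(np.ceil(2.0 * (sd_hat * (z_alpha + z_beta) / delta) ** 2))

# Example: interim SD estimate of 1.2 against a target effect of 0.5
print(reestimated_sample_size(sd_hat=1.2, delta=0.5))
```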
-
We will present a new general framework for robust and adaptive control that allows for distributed and scalable learning and control of large systems of interconnected linear subsystems. The control method is demonstrated for a linear time-invariant system with bounded parameter uncertainties, disturbances and noise. The presented scheme continuously collects measurements to reduce the uncertainty about the system parameters and adapts dynamic robust controllers online in a stable and performance-improving way. A key enabler for our approach is choosing a time-varying dynamic controller implementation, inspired by recent work on System Level Synthesis [1]. We leverage a new robustness result for this implementation to propose a general robust adaptive control algorithm. In particular, the algorithm allows us to impose communication and delay constraints on the controller implementation and is formulated as a sequence of robust optimization problems that can be solved in a distributed manner. The proposed control methodology performs particularly well when the interconnection between systems is sparse and the dynamics of local regions of subsystems depend only on a small number of parameters. As we will show on a five-dimensional exemplary chain-system, the algorithm can utilize system structure to efficiently learn and control the entire system while respecting communication …
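The distributed robust SLS-based algorithm itself involves time-varying controller implementations and robust optimization that are beyond a short sketch; the snippet below shows only a certainty-equivalence adaptive loop (recursive least squares to shrink parameter uncertainty, then an LQR gain for the current estimate), named plainly as a stand-in rather than the presented method.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def adaptive_step(theta, P, x, u, x_next, Q, R):
    """One certainty-equivalence step: refine theta = [A B] by
    recursive least squares on x_next = theta @ [x; u], then
    recompute an LQR gain for the refined model."""
    z = np.concatenate([x, u])[:, None]        # regressor [x; u]
    err = x_next[:, None] - theta @ z          # one-step prediction error
    g = P @ z / (1.0 + z.T @ P @ z)            # RLS gain
    theta = theta + err @ g.T                  # parameter update
    P = P - g @ z.T @ P                        # shrink uncertainty
    n = x.size
    A, B = theta[:, :n], theta[:, n:]
    S = solve_discrete_are(A, B, Q, R)         # Riccati solution
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    return theta, P, K                         # next input: u = -K @ x
```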
-
Abstract This study presents a particle filter based framework to track cardiac surface from a time sequence of single magnetic resonance imaging (MRI) slices with the future goal of utilizing the presented framework for interventional cardiovascular magnetic resonance procedures, which rely on the accurate and online tracking of the cardiac surface from MRI data. The framework exploits a low-order parametric deformable model of the cardiac surface. A stochastic dynamic system represents the cardiac surface motion. Deformable models are employed to introduce shape prior to control the degree of the deformations. Adaptive filters are used to model complex cardiac motion in the dynamic model of the system. Particle filters are utilized to recursively estimate the current state of the system over time. The proposed method is applied to recover biventricular deformations and validated with a numerical phantom and multiple real cardiac MRI datasets. The algorithm is evaluated with multiple experiments using fixed and varying image slice planes at each time step. For the real cardiac MRI datasets, the average root-mean-square tracking errors of 2.61 mm and 3.42 mm are reported respectively for the fixed and varying image slice planes. This work serves as a proof-of-concept study for modeling and tracking the cardiac surface …
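The study's deformable-model likelihood and adaptive motion model are specific to the MRI setting; as a structural illustration only, one bootstrap particle-filter step over low-order shape parameters might look like the following, where observe_loglik is a hypothetical stand-in for the slice-likelihood of the deformable cardiac surface model.

```python
import numpy as np

def particle_filter_step(particles, weights, observe_loglik, motion_std, rng):
    """One bootstrap step: propagate particles through a random-walk
    motion model, reweight by the observation log-likelihood, and
    resample when the effective sample size drops too low."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    weights = weights * np.exp(observe_loglik(particles))
    weights = weights / weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * weights.size:   # low ESS
        idx = rng.choice(weights.size, size=weights.size, p=weights)
        particles = particles[idx]
        weights = np.full(weights.size, 1.0 / weights.size)
    return particles, weights

# Toy usage: track one shape parameter toward an observed value of 1.0
rng = np.random.default_rng(0)
parts = rng.normal(0.0, 1.0, (500, 1))
w = np.full(500, 1.0 / 500)
loglik = lambda x: -0.5 * ((x[:, 0] - 1.0) / 0.2) ** 2    # dummy likelihood
parts, w = particle_filter_step(parts, w, loglik, motion_std=0.05, rng=rng)
print((w[:, None] * parts).sum(axis=0))                    # posterior mean
```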