



Title: Bonferroni-Free and Indifference-Zone-Flexible Sequential Elimination Procedures for Ranking and Selection
This paper proposes two fully sequential procedures for selecting the best system with a guaranteed probability of correct selection (PCS). The main features of the proposed procedures include the following: (1) adopting a Bonferroni-free model that overcomes the conservativeness of the Bonferroni correction and delivers the exact probabilistic guarantee without overshooting; (2) conducting always-valid and fully sequential hypothesis tests that enable continuous monitoring of each candidate system and control the type I error rate (or, equivalently, PCS) at a prescribed level; and (3) assuming an indifference-zone-flexible formulation, which means that the indifference-zone parameter is not indispensable but can be helpful if provided. We establish statistical validity and asymptotic efficiency for the proposed procedures under normality settings with and without knowledge of the true variances. Numerical studies conducted under various configurations corroborate the theoretical findings and demonstrate the superiority of the proposed procedures. Funding: W. Wang and H. Wan were supported in part by CollinStar Capital Pty Ltd. X. Chen was supported in part by the National Science Foundation [Grants IIS-1849300 and CAREER CMMI-1846663]. Supplemental Material: The online appendix is available at .
Award ID(s): 1846663, 1849300
Journal Name: Operations Research
Sponsoring Org: National Science Foundation
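The elimination idea described in the abstract can be illustrated in a few lines. The sketch below is a generic sequential-elimination loop with an always-valid (anytime) confidence radius, not the paper's Bonferroni-free procedure: `confidence_radius` uses a standard mixture-style bound and assumes a known, common variance purely for illustration.

```python
import math
import random

def confidence_radius(n, sigma, alpha):
    # Illustrative always-valid (anytime) radius based on a mixture-style
    # iterated-logarithm bound; the paper's Bonferroni-free bounds are sharper.
    return sigma * math.sqrt(2 * (math.log(math.log(max(n, 3))) + math.log(1 / alpha)) / n)

def sequential_elimination(means, sigma=1.0, alpha=0.05, max_n=100000, seed=1):
    """Toy fully sequential elimination: sample all surviving systems each
    stage, and eliminate any system whose upper confidence bound falls below
    the best surviving lower bound. `means`/`sigma` define the simulated
    systems (an assumption of this sketch, not part of the paper)."""
    rng = random.Random(seed)
    k = len(means)
    alive = set(range(k))
    sums = [0.0] * k
    n = 0
    while len(alive) > 1 and n < max_n:
        n += 1
        for i in alive:
            sums[i] += rng.gauss(means[i], sigma)
        r = confidence_radius(n, sigma, alpha)
        best_lower = max(sums[i] / n - r for i in alive)
        # Keep only systems still plausibly the best.
        alive = {i for i in alive if sums[i] / n + r >= best_lower}
    return max(alive, key=lambda i: sums[i] / n)
```

With well-separated means, the loop terminates long before `max_n`, returning the index of the best system with probability at least 1 - alpha.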
More Like this
  1. We consider a simulation-based ranking and selection (R&S) problem with input uncertainty, in which unknown input distributions can be estimated using input data arriving in batches of varying sizes over time. Each time a batch arrives, additional simulations can be run using updated input distribution estimates. The goal is to confidently identify the best design after collecting as few batches as possible. We first introduce a moving average estimator for aggregating simulation outputs generated under heterogeneous input distributions. Then, based on a sequential elimination framework, we devise two major R&S procedures by establishing exact and asymptotic confidence bands for the estimator. We also extend our procedures to the indifference-zone setting, which helps save simulation effort in practical usage. Numerical results show the effectiveness and necessity of our procedures in controlling error from input uncertainty. Moreover, efficiency can be further boosted by optimizing the "drop rate" parameter of the moving average estimator, which is the proportion of past simulation outputs to discard.
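The "drop rate" idea can be sketched directly. The truncation rule below (drop the oldest fraction of outputs, which were generated under stale input-distribution estimates) is a simplifying assumption of this sketch, not the paper's exact estimator.

```python
def moving_average(outputs, drop_rate=0.5):
    """Average only the most recent (1 - drop_rate) fraction of simulation
    outputs; `drop_rate` is the proportion of past outputs to discard.
    Illustrative sketch only."""
    n = len(outputs)
    start = int(drop_rate * n)          # index of the oldest output kept
    kept = outputs[start:] or outputs[-1:]  # always keep at least one output
    return sum(kept) / len(kept)
```

Dropping early outputs trades variance (fewer samples) against bias (stale input distributions), which is why tuning the drop rate can boost efficiency.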
  2. Abstract

    In this paper, we propose a new framework to construct confidence sets for a $d$-dimensional unknown sparse parameter ${\boldsymbol \theta }$ under the normal mean model ${\boldsymbol X}\sim N({\boldsymbol \theta },\sigma ^{2}\bf{I})$. A key feature of the proposed confidence set is its ability to account for the sparsity of ${\boldsymbol \theta }$; it is thus termed a sparse confidence set. This is in sharp contrast with classical methods, such as the Bonferroni confidence intervals and other resampling-based procedures, where the sparsity of ${\boldsymbol \theta }$ is often ignored. Specifically, we require the desired sparse confidence set to satisfy the following two conditions: (i) uniformly over the parameter space, the coverage probability for ${\boldsymbol \theta }$ is above a pre-specified level; (ii) there exists a random subset $S$ of $\{1,...,d\}$ such that $S$ guarantees the pre-specified true negative rate for detecting non-zero $\theta _{j}$’s. To exploit the sparsity of ${\boldsymbol \theta }$, we allow the confidence interval for $\theta _{j}$ to degenerate to a single point 0 for any $j\notin S$. Under this new framework, we first consider whether there exist sparse confidence sets that satisfy the above two conditions. To address this question, we establish a non-asymptotic minimax lower bound for the non-coverage probability over a suitable class of sparse confidence sets. The lower bound deciphers the role of sparsity and minimum signal-to-noise ratio (SNR) in the construction of sparse confidence sets. Furthermore, under suitable conditions on the SNR, a two-stage procedure is proposed to construct a sparse confidence set. To evaluate its optimality, the proposed sparse confidence set is shown to attain a minimax lower bound of some properly defined risk function up to a constant factor. Finally, we develop a procedure that adapts to the unknown sparsity. Numerical studies are conducted to verify the theoretical results.

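A two-stage construction of this kind can be sketched as follows. This is an illustrative sketch, not the paper's exact procedure: the universal screening threshold $\sigma\sqrt{2\log d}$ and the Bonferroni-corrected intervals over the selected set are assumptions made for concreteness.

```python
import math
from statistics import NormalDist

def sparse_confidence_set(x, sigma=1.0, alpha=0.05):
    """Stage 1: screen coordinates with a universal threshold to form the
    random subset S. Stage 2: attach a Bonferroni-corrected interval to each
    selected coordinate and the degenerate interval {0} elsewhere."""
    d = len(x)
    t = sigma * math.sqrt(2 * math.log(d))       # universal screening threshold
    S = [j for j in range(d) if abs(x[j]) > t]   # estimated support
    z = NormalDist().inv_cdf(1 - alpha / (2 * max(len(S), 1)))
    intervals = {}
    for j in range(d):
        if j in S:
            intervals[j] = (x[j] - z * sigma, x[j] + z * sigma)
        else:
            intervals[j] = (0.0, 0.0)            # degenerate interval at 0
    return S, intervals
```

Coordinates outside `S` collapse to the point 0, which is exactly how the framework exploits sparsity to shrink the confidence set.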
  3. Site description. This data package consists of data obtained from sampling surface soil (the 0-7.6 cm depth profile) in black mangrove (Avicennia germinans) dominated forest and black needlerush (Juncus roemerianus) saltmarsh along the Gulf of Mexico coastline in peninsular west-central Florida, USA. This location has a subtropical climate with mean daily temperatures ranging from 15.4 °C in January to 27.8 °C in August, and annual precipitation of 1336 mm. Precipitation falls as rain primarily between June and September. Tides are semi-diurnal, with 0.57 m median amplitudes during the year preceding sampling (U.S. NOAA National Ocean Service, Clearwater Beach, Florida, station 8726724). Sea-level rise is 4.0 ± 0.6 mm per year (1973-2020 trend, mean ± 95 % confidence interval, NOAA NOS Clearwater Beach station). The A. germinans mangrove zone is either adjacent to water or fringed on the seaward side by a narrow band of red mangrove (Rhizophora mangle). A near-monoculture of J. roemerianus is often adjacent to and immediately landward of the A. germinans zone. The transition from the mangrove to the J. roemerianus zone is variable in our study area. An abrupt edge between closed-canopy mangrove and J. roemerianus monoculture may extend for up to several hundred meters in some locations, while other stretches of ecotone present a gradual transition where smaller, widely spaced trees are interspersed into the herbaceous marsh. Juncus roemerianus then extends landward to a high marsh patchwork of succulent halophytes (including Salicornia bigelovii, Sesuvium sp., and Batis maritima), scattered dwarf mangrove, and salt pans, followed in turn by upland vegetation that includes Pinus sp. and Serenoa repens.

Field design and sample collection. We established three study sites spaced at approximately 5 km intervals along the western coastline of the central Florida peninsula.
The sites consisted of the Salt Springs (28.3298°, -82.7274°), Energy Marine Center (28.2903°, -82.7278°), and Green Key (28.2530°, -82.7496°) sites on the Gulf of Mexico coastline in Pasco County, Florida, USA. At each site, we established three plot pairs, each consisting of one saltmarsh plot and one mangrove plot. Plots were 50 m^2 in size. Plot pairs within a site were separated by 230-1070 m, and the mangrove and saltmarsh plots composing a pair were 70-170 m apart. All plot pairs consisted of directly adjacent patches of mangrove forest and J. roemerianus saltmarsh, with the mangrove forests exhibiting a closed canopy and a tree architecture (height 4-6 m, crown width 1.5-3 m). Mangrove plots were located at approximately the midpoint between the seaward edge (water-mangrove interface) and landward edge (mangrove-marsh interface) of the mangrove zone. Saltmarsh plots were located 20-25 m away from any mangrove trees and into the J. roemerianus zone (i.e., landward from the mangrove-marsh interface). Plot pairs were coarsely similar in geomorphic setting, as all were located on the Gulf of Mexico coastline, rather than within major sheltering formations like Tampa Bay, and all plot pairs fit the tide-dominated domain of the Woodroffe classification (Woodroffe, 2002, "Coasts: Form, Process and Evolution", Cambridge University Press), given their conspicuous semi-diurnal tides. There was nevertheless some geomorphic variation, as some plot pairs were directly open to the Gulf of Mexico while others sat behind keys and spits or along small tidal creeks. Our use of a plot-pair approach is intended to control for this geomorphic variation. Plot center elevations (cm above mean sea level, NAVD 88) were estimated by overlaying the plot locations determined with a global positioning system (Garmin GPS 60, Olathe, KS, USA) on a LiDAR-derived bare-earth digital elevation model (Dewberry, Inc., 2019).
The digital elevation model had a vertical accuracy of ± 10 cm (95 % CI) and a horizontal accuracy of ± 116 cm (95 % CI). Soil samples were collected via coring at low tide in June 2011. From each plot, we collected a composite soil sample consisting of three discrete 5.1 cm diameter soil cores taken at equidistant points to 7.6 cm depth. Cores were taken by tapping a sleeve into the soil until its top was flush with the soil surface, sliding a hand under the core, and lifting it up. Cores were then capped and transferred on ice to our laboratory at the University of South Florida (Tampa, Florida, USA), where they were combined in plastic zipper bags, and homogenized by hand into plot-level composite samples on the day they were collected. A damp soil subsample was immediately taken from each composite sample to initiate 1 y incubations for determination of active C and N (see below). The remainder of each composite sample was then placed in a drying oven (60 °C) for 1 week with frequent mixing of the soil to prevent aggregation and liberate water. Organic wetland soils are sometimes dried at 70 °C; however, high drying temperatures can volatilize non-water liquids and oxidize and decompose organic matter, so 50 °C is also a common drying temperature for organic soils (Gardner 1986, "Methods of Soil Analysis: Part 1", Soil Science Society of America); we accordingly chose 60 °C as a compromise between sufficient water removal and avoidance of non-water mass loss. Bulk density was determined as soil dry mass per core volume (adding back the dry mass equivalent of the damp subsample removed prior to drying). Dried subsamples were obtained for determination of soil organic matter (SOM), mineral texture composition, and extractable and total carbon (C) and nitrogen (N) within the following week.

Sample analyses. A dried subsample was apportioned from each composite sample to determine SOM as mass loss on ignition at 550 °C for 4 h.
After organic matter was removed from soil via ignition, mineral particle size composition was determined using a combination of wet sieving and density separation in 49 mM (3 %) sodium hexametaphosphate ((NaPO_3)_6) following procedures in Kettler et al. (2001, Soil Science Society of America Journal 65, 849-852). The percentage of dry soil mass composed of silt and clay particles (hereafter, fines) was calculated as the mass lost from dispersed mineral soil after sieving (0.053 mm mesh sieve). Fines could have been slightly underestimated if any clay particles were burned off during the preceding ignition of soil. An additional subsample was taken from each composite sample to determine extractable N and organic C concentrations via 0.5 M potassium sulfate (K_2SO_4) extractions. We combined soil and extractant (ratio of 1 g dry soil:5 mL extractant) in plastic bottles, reciprocally shook the slurry for 1 h at 120 rpm, and then gravity filtered it through Fisher G6 (1.6 μm pore size) glass fiber filters, followed by colorimetric detection of nitrite (NO_2^-) + nitrate (NO_3^-) and ammonium (NH_4^+) in the filtrate (Hood-Nowotny et al., 2010, Soil Science Society of America Journal 74, 1018-1027) using a microplate spectrophotometer (Biotek Epoch, Winooski, VT, USA). Filtrate was also analyzed for dissolved organic C (referred to hereafter as extractable organic C) and total dissolved N via combustion and oxidation followed by detection of the evolved CO_2 and N oxide gases on a Formacs HT TOC/TN analyzer (Skalar, Breda, The Netherlands). Extractable organic N was then computed as total dissolved N in filtrate minus extractable mineral N (itself the sum of extractable NH_4-N and NO_2-N + NO_3-N). We determined soil total C and N from dried, milled subsamples subjected to elemental analysis (ECS 4010, Costech, Inc., Valencia, CA, USA) at the University of South Florida Stable Isotope Laboratory.
Median concentration of inorganic C in unvegetated surface soil at our sites is 0.5 % of soil mass (Anderson, 2019, Univ. of South Florida M.S. thesis via methods in Wang et al., 2011, Environmental Monitoring and Assessment 174, 241-257). Inorganic C concentrations are likely even lower in our samples from under vegetation, where organic matter would dilute the contribution of inorganic C to soil mass. Nevertheless, the presence of a small inorganic C pool in our soils may be included in the total C values we report. Extractable organic C is necessarily of organic C origin given the method (sparging with HCl) used in detection. Active C and N represent the fractions of organic C and N that are mineralizable by soil microorganisms under aerobic conditions in long-term soil incubations. To quantify active C and N, 60 g of field-moist soil were apportioned from each composite sample, placed in a filtration apparatus, and incubated in the dark at 25 °C and field capacity moisture for 365 d (as in Lewis et al., 2014, Ecosphere 5, art59). Moisture levels were maintained by frequently weighing the incubated soils and wetting them up to target mass. Daily CO_2 flux was quantified on 29 occasions at 0.5-3 week intervals during the incubation period (with shorter intervals earlier in the incubation), and these per day flux rates were integrated over the 365 d period to compute an estimate of active C. Observations of per day flux were made by sealing samples overnight in airtight chambers fitted with septa and quantifying headspace CO_2 accumulation by injecting headspace samples (obtained through the septa via needle and syringe) into an infrared gas analyzer (PP Systems EGM 4, Amesbury, MA, USA).
To estimate active N, each incubated sample was leached with a C- and N-free, 35 psu solution containing micronutrients (Nadelhoffer, 1990, Soil Science Society of America Journal 54, 411-415) on 19 occasions at increasing 1-6 week intervals during the 365 d incubation, and then extracted in 0.5 M K_2SO_4 at the end of the incubation in order to remove any residual mineral N. Active N was then quantified as the total mass of mineral N leached and extracted. Mineral N in leached and extracted solutions was detected as NH_4-N and NO_2-N + NO_3-N via colorimetry as above. This incubation technique precludes new C and N inputs and persistently leaches mineral N, forcing microorganisms to meet demand by mineralizing existing pools, and thereby directly assays the potential activity of soil organic C and N pools present at the time of soil sampling. Because this analysis commences with disrupting soil physical structure, it is biased toward higher estimates of active fractions.

Calculations. Non-mobile C and N fractions were computed as total C and N concentrations minus the extractable and active fractions of each element. This data package reports surface-soil constituents (moisture, fines, SOM, and C and N pools and fractions) in both gravimetric units (mass constituent / mass soil) and areal units (mass constituent / soil surface area integrated through 7.6 cm soil depth, the depth of sampling). Areal concentrations were computed as X × D × 7.6, where X is the gravimetric concentration of a soil constituent, D is soil bulk density (g dry soil / cm^3), and 7.6 is the sampling depth in cm.
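The areal-unit conversion stated above (X × D × 7.6) is a one-line calculation; the helper below simply encodes that formula, with the function name chosen here for illustration.

```python
def areal_concentration(gravimetric, bulk_density, depth_cm=7.6):
    """Convert a gravimetric concentration X (mass constituent / mass dry
    soil) to an areal one (mass constituent / cm^2 of soil surface through
    the sampled depth): X * D * depth, where D is bulk density in
    g dry soil / cm^3 and depth is the 7.6 cm sampling depth."""
    return gravimetric * bulk_density * depth_cm
```

For example, a constituent at 10 % of dry soil mass in soil with bulk density 0.5 g/cm^3 corresponds to 0.38 g of constituent per cm^2 over the 7.6 cm profile.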
  4. As Point Clouds (PCs) gain popularity in processing millions of data points for 3D rendering in many applications, efficient data compression becomes a critical issue. This is because compression is the primary bottleneck in minimizing the latency and energy consumption of existing PC pipelines. Data compression becomes even more critical as PC processing is pushed to edge devices with limited compute and power budgets. In this paper, we propose and evaluate two complementary schemes, intra-frame compression and inter-frame compression, to speed up the PC compression, without losing much quality or compression efficiency. Unlike existing techniques that use sequential algorithms, our first design, intra-frame compression, exploits parallelism for boosting the performance of both geometry and attribute compression. The proposed parallelism brings around 43.7× performance improvement and 96.6% energy savings at a cost of 1.01× larger compressed data size. To further improve the compression efficiency, our second scheme, inter-frame compression, considers the temporal similarity among the video frames and reuses the attribute data from the previous frame for the current frame. We implement our designs on an NVIDIA Jetson AGX Xavier edge GPU board. Experimental results with six videos show that the combined compression schemes provide 34.0× speedup compared to a state-of-the-art scheme, with minimal impact on quality and compression ratio. 
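The inter-frame idea, reusing attribute data from the previous frame where temporal similarity allows, can be sketched as below. This is a hypothetical toy, not the authors' GPU implementation: the per-point position comparison and `tol` threshold are assumptions made for illustration.

```python
def interframe_attributes(prev_points, prev_attrs, cur_points, cur_attrs, tol=1e-3):
    """For each point, reuse the previous frame's (already compressed)
    attribute when the point's position is unchanged within `tol`;
    otherwise take the current frame's freshly encoded attribute.
    Returns the attribute list and the count of reused entries."""
    out = []
    reused = 0
    for p_prev, a_prev, p_cur, a_cur in zip(prev_points, prev_attrs, cur_points, cur_attrs):
        if all(abs(u - v) <= tol for u, v in zip(p_prev, p_cur)):
            out.append(a_prev)   # temporally similar: skip re-encoding
            reused += 1
        else:
            out.append(a_cur)    # geometry changed: encode a fresh attribute
    return out, reused
```

The reuse count is what drives the compression speedup: every reused entry is attribute-encoding work skipped for the current frame.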
  5. Building and expanding on principles of statistics, machine learning, and scientific inquiry, we propose the predictability, computability, and stability (PCS) framework for veridical data science. Our framework, composed of both a workflow and documentation, aims to provide responsible, reliable, reproducible, and transparent results across the data science life cycle. The PCS workflow uses predictability as a reality check and considers the importance of computation in data collection/storage and algorithm design. It augments predictability and computability with an overarching stability principle. Stability expands on statistical uncertainty considerations to assess how human judgment calls impact data results through data and model/algorithm perturbations. As part of the PCS workflow, we develop PCS inference procedures, namely PCS perturbation intervals and PCS hypothesis testing, to investigate the stability of data results relative to problem formulation, data cleaning, modeling decisions, and interpretations. We illustrate PCS inference through neuroscience and genomics projects of our own and others. Moreover, we demonstrate its favorable performance over existing methods in terms of receiver operating characteristic (ROC) curves in high-dimensional, sparse linear model simulations, including a wide range of misspecified models. Finally, we propose PCS documentation based on R Markdown or Jupyter Notebook, with publicly available, reproducible codes and narratives to back up human choices made throughout an analysis. The PCS workflow and documentation are demonstrated in a genomics case study available on Zenodo. 
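The stability principle, pooling results across data and model/algorithm perturbations, can be sketched in miniature. This is illustrative only and assumes bootstrap resampling as the data perturbation; the paper's PCS perturbation intervals are defined more generally.

```python
import random
import statistics

def pcs_perturbation_interval(data, estimators, n_boot=200, coverage=0.9, seed=0):
    """Apply each model perturbation (an estimator function) to bootstrap
    resamples of the data (the data perturbation), then report central
    quantiles of the pooled results as a perturbation interval."""
    rng = random.Random(seed)
    results = []
    for est in estimators:            # model/algorithm perturbations
        for _ in range(n_boot):       # data perturbations via bootstrap
            resample = [rng.choice(data) for _ in data]
            results.append(est(resample))
    q = statistics.quantiles(results, n=100)   # 1st..99th percentiles
    lo_idx = int((1 - coverage) / 2 * 100)     # e.g. 5 for 90 % coverage
    return q[lo_idx - 1], q[100 - lo_idx - 1]  # (5th, 95th) percentiles
```

A wide interval signals that the data result is sensitive to judgment calls (resampling, estimator choice), which is precisely the instability PCS inference is designed to surface.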