Title: A robust pooled testing approach to expand COVID-19 screening capacity
Limited testing capacity for COVID-19 has hampered the pandemic response. Pooling is a testing method wherein samples from specimens (e.g., swabs) from multiple subjects are combined into a pool and screened with a single test. If the pool tests positive, then new samples from the collected specimens are individually tested, while if the pool tests negative, the subjects are classified as negative for the disease. Pooling can substantially expand COVID-19 testing capacity and throughput without requiring additional resources. We develop a mathematical model to determine the best pool size for different risk groups, based on each group’s estimated COVID-19 prevalence. Our approach takes into consideration the sensitivity and specificity of the test, as well as a dynamic and uncertain prevalence, and provides a robust pool size for each group. For practical relevance, we also develop a companion COVID-19 pooling design tool (through a spreadsheet). To demonstrate the potential value of pooling, we study COVID-19 screening using testing data from Iceland for the period February 28, 2020 to June 14, 2020, for subjects stratified into high- and low-risk groups. We implement the robust pooling strategy within a sequential framework, which updates pool sizes each week, for each risk group, based on the prior week’s testing data. Robust pooling reduces the number of tests, over individual testing, by 88.5% to 90.2% for the low-risk group and 54.2% to 61.9% for the high-risk group (based on test sensitivity values in the range [0.71, 0.98] as reported in the literature). This results in much shorter times, on average, to obtain test results compared to individual testing (due to the higher testing throughput), and also allows expanded screening to cover more individuals. Thus, robust pooling can potentially be a valuable strategy for COVID-19 screening.
Award ID(s): 2052575, 1761842
NSF-PAR ID: 10274208
Author(s) / Creator(s):
Editor(s): Pantea, Casian
Date Published:
Journal Name: PLOS ONE
Volume: 16
Issue: 2
ISSN: 1932-6203
Page Range / eLocation ID: e0246285
Format(s): Medium: X
Sponsoring Org: National Science Foundation
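As a rough illustration of the pool-size optimization described in the abstract above, the sketch below computes the expected number of tests per subject under two-stage (Dorfman-style) pooling with an imperfect test, then grid-searches for the pool size that minimizes it. This is a minimal deterministic sketch, not the paper's robust model: the function names are ours, the sensitivity, specificity, and prevalence values are illustrative placeholders, and neither dilution effects nor prevalence uncertainty is modeled.

```python
def expected_tests_per_subject(p, n, se=0.95, sp=0.99):
    """Expected tests per subject under two-stage Dorfman pooling.
    A pool of n samples is tested once; if it flags positive, all n
    members are retested individually. Assumes the pooled test flags
    positive with probability se when at least one member is infected,
    and with probability (1 - sp) otherwise (no dilution modeled).
    """
    p_all_negative = (1.0 - p) ** n
    p_pool_flags = se * (1.0 - p_all_negative) + (1.0 - sp) * p_all_negative
    # One pooled test shared by n subjects, plus n retests when the pool flags.
    return 1.0 / n + p_pool_flags

def best_pool_size(p, n_max=32, **kwargs):
    """Pool size (>= 2) minimizing expected tests per subject, by grid search."""
    return min(range(2, n_max + 1),
               key=lambda n: expected_tests_per_subject(p, n, **kwargs))

# Illustrative low- and high-prevalence groups (values are placeholders).
for p in (0.005, 0.05):
    n = best_pool_size(p)
    saving = 1.0 - expected_tests_per_subject(p, n)
    print(f"prevalence {p:.1%}: best pool size {n}, "
          f"~{saving:.0%} fewer tests than individual testing")
```

At low prevalence the optimal pool is large and the expected savings from pooling are substantial, consistent in spirit with the reductions reported in the abstract.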
More Like this
  1. Problem definition: Infectious disease screening can be expensive and capacity constrained. We develop cost- and capacity-efficient testing designs for multidisease screening, considering (1) multiplexing (disease bundling), where one assay detects multiple diseases using the same specimen (e.g., nasal swabs, blood), and (2) pooling (specimen bundling), where one assay is used on specimens from multiple subjects bundled in a testing pool. A testing design specifies an assay portfolio (mix of single-disease/multiplex assays) and a testing method (pooling/individual testing per assay). Methodology/results: We develop novel models for the nonlinear, combinatorial multidisease testing design problem: a deterministic model and a distribution-free, robust variation, which both generate Pareto frontiers for cost- and capacity-efficient designs. We characterize structural properties of optimal designs, formulate the deterministic counterpart of the robust model, and conduct a case study of respiratory diseases (including coronavirus disease 2019) with overlapping clinical presentation. Managerial implications: Key drivers of optimal designs include the assay cost function, the tester’s preference toward cost versus capacity efficiency, prevalence/coinfection rates, and for the robust model, prevalence uncertainty. When an optimal design uses multiple assays, it does so in conjunction with pooling, and it uses individual testing for at most one assay. Although prevalence uncertainty can be a design hurdle, especially for emerging or seasonal diseases, the integration of multiplexing and pooling, and the ordered partition property of optimal designs (under certain coinfection structures) serve to make the design more structurally robust to uncertainty. The robust model further increases robustness, and it is also practical as it needs only an uncertainty set around each disease prevalence. Our Pareto designs demonstrate the cost versus capacity trade-off and show that multiplexing-only or pooling-only designs need not be on the Pareto frontier. Our case study illustrates the benefits of optimally integrated designs over current practices and indicates a low price of robustness.

    Funding: This work was supported by the National Science Foundation [Grant 1761842].

    Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.0296 .
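To make the cost-versus-capacity trade-off in item 1 concrete, here is a toy enumeration for two diseases that scores each candidate design (one pooled multiplex assay versus two pooled single-disease assays) on expected cost and expected assays per subject, and keeps only the Pareto-efficient designs. All numbers below (prevalences, assay costs, the perfect-test assumption) are illustrative assumptions, not values from the paper.

```python
from itertools import product

def assays_per_subject(prev, n):
    """Expected assays per subject for one disease stream under two-stage
    Dorfman pooling with a perfect test (n = 1 means individual testing)."""
    if n == 1:
        return 1.0
    return 1.0 / n + (1.0 - (1.0 - prev) ** n)

# Toy two-disease instance; all numbers are illustrative assumptions.
p1, p2 = 0.01, 0.03
cost_single, cost_multiplex = 1.0, 1.6   # cost per assay use

designs = []
# Option A: one multiplex assay covering both diseases, pooled at size n.
for n in range(1, 17):
    p_union = 1.0 - (1.0 - p1) * (1.0 - p2)   # pool flags if either disease is present
    a = assays_per_subject(p_union, n)
    designs.append((f"multiplex, pool={n}", a * cost_multiplex, a))
# Option B: two single-disease assays, each pooled independently.
for n1, n2 in product(range(1, 17), repeat=2):
    a = assays_per_subject(p1, n1) + assays_per_subject(p2, n2)
    designs.append((f"singles, pools=({n1},{n2})", a * cost_single, a))

def dominated(d, others):
    """d is dominated if some design is no worse on both objectives
    and strictly better on at least one."""
    return any(o[1] <= d[1] and o[2] <= d[2] and (o[1] < d[1] or o[2] < d[2])
               for o in others)

pareto = sorted((d for d in designs if not dominated(d, designs)),
                key=lambda d: d[1])
for name, cost, cap in pareto:
    print(f"{name}: cost/subject={cost:.3f}, assays/subject={cap:.3f}")
```

Even this toy version exhibits the paper's qualitative finding: neither multiplexing-only nor pooling-only designs need dominate, and the efficient frontier mixes both.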

     
  2.
    Background Conventional diagnosis of COVID-19 with reverse transcription polymerase chain reaction (RT-PCR) testing (hereafter, PCR) is associated with prolonged time to diagnosis and significant costs to run the test. The SARS-CoV-2 virus might lead to characteristic patterns in the results of widely available, routine blood tests that could be identified with machine learning methodologies. Machine learning modalities integrating findings from these common laboratory test results might accelerate ruling out COVID-19 in emergency department patients. Objective We sought to develop (ie, train and internally validate with cross-validation techniques) and externally validate a machine learning model to rule out COVID-19 using only routine blood tests among adults in emergency departments. Methods Using clinical data from emergency departments (EDs) from 66 US hospitals before the pandemic (before the end of December 2019) or during the pandemic (March-July 2020), we included patients aged ≥20 years in the study time frame. We excluded those with missing laboratory results. Model training used 2183 PCR-confirmed cases from 43 hospitals during the pandemic; negative controls were 10,000 prepandemic patients from the same hospitals. External validation used 23 hospitals with 1020 PCR-confirmed cases and 171,734 prepandemic negative controls. The main outcome was COVID-19 status predicted using same-day routine laboratory results. Model performance was assessed with area under the receiver operating characteristic (AUROC) curve as well as sensitivity, specificity, and negative predictive value (NPV). Results Of 192,779 patients included in the training, external validation, and sensitivity data sets (median age decile 50 [IQR 30-60] years, 40.5% male [78,249/192,779]), AUROC for training and external validation was 0.91 (95% CI 0.90-0.92). Using a risk score cutoff of 1.0 (out of 100) in the external validation data set, the model achieved sensitivity of 95.9% and specificity of 41.7%; with a cutoff of 2.0, sensitivity was 92.6% and specificity was 59.9%. At the cutoff of 2.0, the NPVs at a prevalence of 1%, 10%, and 20% were 99.9%, 98.6%, and 97%, respectively. Conclusions A machine learning model developed with multicenter clinical data integrating commonly collected ED laboratory data demonstrated high rule-out accuracy for COVID-19 status, and might inform selective use of PCR-based testing.
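The NPV figures quoted at the 2.0 cutoff follow directly from Bayes' rule applied to the reported sensitivity and specificity; the short check below (our code, standard formula) reproduces them.

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value, P(disease-free | negative result), by Bayes' rule."""
    true_negatives = specificity * (1.0 - prevalence)
    false_negatives = (1.0 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

# Cutoff 2.0 from the abstract: sensitivity 92.6%, specificity 59.9%.
for p in (0.01, 0.10, 0.20):
    print(f"prevalence {p:.0%}: NPV = {npv(0.926, 0.599, p):.1%}")
# Prints 99.9%, 98.6%, 97.0% -- matching the reported values.
```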
  3. An accurate estimation of the residual risk of transfusion‐transmittable infections (TTIs), which include the human immunodeficiency virus (HIV), hepatitis B and C viruses (HBV, HCV), among others, is essential, as it provides the basis for blood screening assay selection. While highly sensitive nucleic acid testing (NAT) technology has recently become available, it is costly. As a result, in most countries, including the United States, the current practice for HIV, HBV, and HCV screening in donated blood is to use pooled NAT. Pooling substantially reduces the number of tests required, especially for TTIs with low prevalence rates. However, pooling also reduces the test's sensitivity, because the viral load of an infected sample might be diluted by the other samples in the pool to the point that it is not detectable by NAT, leading to potential TTIs. Infection‐free blood may also be falsely discarded, resulting in wasted blood. We derive expressions for the residual risk, expected number of tests, and expected amount of blood wasted for various two‐stage pooled testing schemes, including Dorfman‐type and array‐based testing, considering infection progression, infectivity of the blood unit, and imperfect tests under the dilution effect and measurement errors. We then calibrate our model using published data and perform a case study. Our study offers key insights on how pooled NAT, used within different testing schemes, contributes to the safety and cost of blood. Copyright © 2016 John Wiley & Sons, Ltd.
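The trade-off this paper quantifies (fewer tests but higher residual risk as pools grow) can be sketched with a toy dilution model. Everything below is an illustrative assumption: the function names, the geometric sensitivity-decay model, and all parameter values are ours, whereas the paper's actual model is calibrated to published data and considerably more detailed.

```python
def sensitivity_with_dilution(n, se_individual=0.99, decay=0.02):
    """Illustrative dilution model: pooled-test sensitivity decays
    geometrically with pool size. Both parameters are placeholders,
    not the paper's calibrated values."""
    return se_individual * (1.0 - decay) ** (n - 1)

def dorfman_blood_screening(p, n, sp=0.995):
    """Expected tests per donation and residual risk (probability a donation
    is both infected and released) under two-stage Dorfman testing."""
    se_pool = sensitivity_with_dilution(n)
    se_ind = sensitivity_with_dilution(1)
    if n == 1:
        return 1.0, p * (1.0 - se_ind)
    p_clean = (1.0 - p) ** n
    p_flag = se_pool * (1.0 - p_clean) + (1.0 - sp) * p_clean
    tests = 1.0 / n + p_flag
    # An infected donation escapes if its pool is missed, or if the pool is
    # caught but the individual confirmatory test then misses it.
    risk = p * (1.0 - se_pool * se_ind)
    return tests, risk

for n in (1, 8, 16, 24):
    tests, risk = dorfman_blood_screening(p=1e-4, n=n)
    print(f"pool size {n:2d}: tests/donation = {tests:.4f}, residual risk = {risk:.2e}")
```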

     
  4. Faeder, James R. (Ed.)
    The rapid spread of SARS-CoV-2 has placed a significant burden on public health systems to provide swift and accurate diagnostic testing, highlighting the critical need for innovative testing approaches for future pandemics. In this study, we present a novel sample pooling procedure based on compressed sensing theory to accurately identify virally infected patients at high prevalence rates, utilizing an innovative viral RNA extraction process to minimize sample dilution. At prevalence rates ranging from 0–14.3%, the number of tests required to identify the infection status of all patients was reduced by 69.26% compared to conventional testing, in primary human SARS-CoV-2 nasopharyngeal swabs and a coronavirus model system. Our method provided quantification of individual sample viral load within a pool, as well as a binary positive-negative result. Additionally, our modified pooling and RNA extraction process minimized sample dilution, which remained constant as pool sizes increased. Compressed sensing can be adapted to a wide variety of diagnostic testing applications to increase throughput for routine laboratory testing, as well as a means to increase testing capacity to combat future pandemics.
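A minimal sketch of the compressed-sensing idea, assuming a random binary pooling design and linear mixing of viral loads (the paper's actual pooling procedure and extraction protocol differ): each pool's signal is a linear combination of its members' loads, and a sparse nonnegative solve recovers both infection status and an estimate of each positive sample's load. Recovery is not guaranteed on every instance; this is a toy, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy instance: 40 patients, 16 pooled measurements, 3 infected (sparse signal).
n_patients, n_pools, n_infected = 40, 16, 3
x_true = np.zeros(n_patients)
hot = rng.choice(n_patients, size=n_infected, replace=False)
x_true[hot] = rng.uniform(5.0, 50.0, size=n_infected)   # viral loads

# Random binary pooling design: row i says which patients enter pool i.
A = (rng.random((n_pools, n_patients)) < 0.3).astype(float)

# Pooled measurements, assuming linear mixing plus small noise
# (real qPCR readouts are log-scaled; this is a simplification).
y = A @ x_true + rng.normal(0.0, 0.1, size=n_pools)

# Nonnegative least squares favors sparse solutions on instances like this,
# though recovery is not guaranteed for every draw of A.
x_hat, _ = nnls(A, y)
recovered = np.flatnonzero(x_hat > 1.0)   # illustrative decision threshold
print("truly infected:", sorted(hot.tolist()))
print("flagged       :", recovered.tolist())
```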
  5.
    Mass testing is essential for identifying infected individuals during an epidemic and allowing healthy individuals to return to normal social activities. However, testing capacity is often insufficient to meet global health needs, especially during newly emerging epidemics. Dorfman’s method, a classic group testing technique, helps reduce the number of tests required by pooling the samples of multiple individuals into a single sample for analysis. Dorfman’s method does not consider the time dynamics or limits on testing capacity involved in infection detection, and it assumes that individuals are infected independently, ignoring community correlations. To address these limitations, we present an adaptive group testing (AGT) strategy based on graph partitioning, which divides a physical contact network into subgraphs (groups of individuals) and assigns testing priorities based on the social contact characteristics of each subgraph. Our AGT aims to maximize the number of infected individuals detected and minimize the number of tests required. After each testing round (perhaps on a daily basis), the testing priority of each group neighboring known infected individuals is increased. We also present an enhanced infectious disease transmission model that simulates the dynamic spread of a pathogen, and we evaluate our AGT strategy using the simulation results. When applied to 13 social contact networks, AGT demonstrates significant performance improvements compared to Dorfman’s method and its variations: it requires fewer tests overall, reduces disease spread, and retains robustness under changes in group size, testing capacity, and other parameters. By identifying infected individuals and helping to prevent further transmission in families and communities, testing plays a crucial role in containing and mitigating pandemics; our AGT strategy can therefore have significant implications for public health, providing guidance for policymakers trying to balance economic activity with the need to manage the spread of infection.
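A minimal sketch of the AGT idea under strong simplifications (perfect pooled tests, a synthetic small-world contact network, off-the-shelf community detection standing in for the paper's partitioning step, and a single testing round): partition the contact network into groups, pool-test groups in priority order, follow up positive pools individually, and escalate the priority of groups bordering a positive pool. All names and parameters below are ours, not the paper's.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(1)

# Toy contact network with a hidden set of infected individuals.
G = nx.connected_watts_strogatz_graph(n=120, k=6, p=0.1, seed=1)
infected = set(random.sample(list(G.nodes), 8))

# Partition the network into candidate testing groups via community detection
# (a stand-in for the paper's graph-partitioning step).
groups = [set(c) for c in greedy_modularity_communities(G)]
priority = {i: 1.0 for i in range(len(groups))}

def pool_test(group):
    """Perfect pooled test assumed: positive iff the group holds an infection."""
    return bool(group & infected)

tests_used, found = 0, set()
# One testing round: visit groups in priority order, retest positive pools
# individually, then raise the priority of groups bordering a positive pool
# so they are tested earlier in the next round.
for i in sorted(priority, key=priority.get, reverse=True):
    tests_used += 1
    if pool_test(groups[i]):
        tests_used += len(groups[i])            # Dorfman-style follow-up
        found |= groups[i] & infected
        boundary = {v for u in groups[i] for v in G.neighbors(u)} - groups[i]
        for j, g in enumerate(groups):
            if j != i and g & boundary:
                priority[j] += 1.0

print(f"tests used: {tests_used}, infected found: {len(found)}/{len(infected)}")
```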

     