

Title: A robust pooled testing approach to expand COVID-19 screening capacity
Limited testing capacity for COVID-19 has hampered the pandemic response. Pooling is a testing method wherein samples from specimens (e.g., swabs) from multiple subjects are combined into a pool and screened with a single test. If the pool tests positive, then new samples from the collected specimens are individually tested, while if the pool tests negative, the subjects are classified as negative for the disease. Pooling can substantially expand COVID-19 testing capacity and throughput without requiring additional resources. We develop a mathematical model to determine the best pool size for different risk groups, based on each group's estimated COVID-19 prevalence. Our approach takes into consideration the sensitivity and specificity of the test, as well as a dynamic and uncertain prevalence, and provides a robust pool size for each group. For practical relevance, we also develop a companion COVID-19 pooling design tool (as a spreadsheet). To demonstrate the potential value of pooling, we study COVID-19 screening using testing data from Iceland for the period February 28, 2020 to June 14, 2020, for subjects stratified into high- and low-risk groups. We implement the robust pooling strategy within a sequential framework, which updates pool sizes each week, for each risk group, based on the prior week's testing data. Robust pooling reduces the number of tests, relative to individual testing, by 88.5% to 90.2% for the low-risk group and by 54.2% to 61.9% for the high-risk group (based on test sensitivity values in the range [0.71, 0.98], as reported in the literature). This results in much shorter times, on average, to obtain test results compared to individual testing (due to the higher testing throughput), and also allows screening to be expanded to cover more individuals. Thus, robust pooling can potentially be a valuable strategy for COVID-19 screening.
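The pool-size choice the abstract describes can be illustrated with a standard two-stage (Dorfman) pooling model. The sketch below is a simplified stand-in for the paper's robust model: it assumes a single known prevalence and ignores the dilution effect, and the default sensitivity/specificity values (0.95, 0.99) are illustrative, not taken from the paper.

```python
def expected_tests_per_subject(n, p, se=0.95, sp=0.99):
    """Expected tests per subject under two-stage (Dorfman) pooling.

    Simplified model: a pool of n samples tests positive with probability
    se if it contains at least one infected sample, and with probability
    (1 - sp) otherwise.  A positive pool triggers n individual follow-up
    tests.  Dilution effects are ignored.
    """
    if n == 1:
        return 1.0  # individual testing: exactly one test per subject
    p_all_negative = (1 - p) ** n
    p_pool_positive = se * (1 - p_all_negative) + (1 - sp) * p_all_negative
    return 1 / n + p_pool_positive

def best_pool_size(p, se=0.95, sp=0.99, n_max=50):
    """Pool size minimizing the expected number of tests per subject."""
    return min(range(1, n_max + 1),
               key=lambda n: expected_tests_per_subject(n, p, se, sp))
```

For example, at 1% prevalence with these illustrative defaults the best pool size is 11, at roughly 0.2 expected tests per subject, i.e., about an 80% reduction versus individual testing; the larger reductions the paper reports for the low-risk group are consistent with Iceland's lower prevalence in that group. Higher prevalence pushes the optimal pool size down.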
Award ID(s):
2052575 1761842
NSF-PAR ID:
10274208
Author(s) / Creator(s):
Editor(s):
Pantea, Casian
Date Published:
Journal Name:
PLOS ONE
Volume:
16
Issue:
2
ISSN:
1932-6203
Page Range / eLocation ID:
e0246285
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Problem definition: Infectious disease screening can be expensive and capacity constrained. We develop cost- and capacity-efficient testing designs for multidisease screening, considering (1) multiplexing (disease bundling), where one assay detects multiple diseases using the same specimen (e.g., nasal swabs, blood), and (2) pooling (specimen bundling), where one assay is used on specimens from multiple subjects bundled in a testing pool. A testing design specifies an assay portfolio (mix of single-disease/multiplex assays) and a testing method (pooling/individual testing per assay). Methodology/results: We develop novel models for the nonlinear, combinatorial multidisease testing design problem: a deterministic model and a distribution-free, robust variation, which both generate Pareto frontiers for cost- and capacity-efficient designs. We characterize structural properties of optimal designs, formulate the deterministic counterpart of the robust model, and conduct a case study of respiratory diseases (including coronavirus disease 2019) with overlapping clinical presentation. Managerial implications: Key drivers of optimal designs include the assay cost function, the tester’s preference toward cost versus capacity efficiency, prevalence/coinfection rates, and for the robust model, prevalence uncertainty. When an optimal design uses multiple assays, it does so in conjunction with pooling, and it uses individual testing for at most one assay. Although prevalence uncertainty can be a design hurdle, especially for emerging or seasonal diseases, the integration of multiplexing and pooling, and the ordered partition property of optimal designs (under certain coinfection structures) serve to make the design more structurally robust to uncertainty. The robust model further increases robustness, and it is also practical as it needs only an uncertainty set around each disease prevalence. 
Our Pareto designs demonstrate the cost versus capacity trade-off and show that multiplexing-only or pooling-only designs need not be on the Pareto frontier. Our case study illustrates the benefits of optimally integrated designs over current practices and indicates a low price of robustness.
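The interplay of multiplexing and pooling described above can be made concrete with a toy assay count. This is not the paper's optimization model (which includes assay cost functions, coinfection structure, and robust uncertainty sets); it assumes independent infections and a perfect test, and the function names are my own.

```python
def pooled_tests_per_subject(n, p):
    """Expected assays per subject for one disease under perfect-test
    Dorfman pooling with pool size n and prevalence p."""
    return 1 / n + (1 - (1 - p) ** n)

def multiplex_pooled_tests_per_subject(n, prevalences):
    """One multiplex assay screens the pool for all diseases at once;
    a positive pool triggers n individual multiplex assays.
    Assumes independent infections and a perfect test."""
    q = 1.0
    for p in prevalences:
        q *= 1 - p  # probability a subject is free of every disease
    return 1 / n + (1 - q ** n)
```

With prevalences of 1% and 2% and pools of 5, the multiplex-plus-pooling design uses about 0.34 assays per subject versus about 0.55 for two separate single-disease pooled assays, echoing the paper's finding that multiplexing pairs naturally with pooling; a real design must of course also weigh the higher cost of a multiplex assay.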

    Funding: This work was supported by the National Science Foundation [Grant 1761842].

    Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.0296 .

     
  2. Faeder, James R. (Ed.)
    The rapid spread of SARS-CoV-2 has placed a significant burden on public health systems to provide swift and accurate diagnostic testing highlighting the critical need for innovative testing approaches for future pandemics. In this study, we present a novel sample pooling procedure based on compressed sensing theory to accurately identify virally infected patients at high prevalence rates utilizing an innovative viral RNA extraction process to minimize sample dilution. At prevalence rates ranging from 0–14.3%, the number of tests required to identify the infection status of all patients was reduced by 69.26% as compared to conventional testing in primary human SARS-CoV-2 nasopharyngeal swabs and a coronavirus model system. Our method provided quantification of individual sample viral load within a pool as well as a binary positive-negative result. Additionally, our modified pooling and RNA extraction process minimized sample dilution which remained constant as pool sizes increased. Compressed sensing can be adapted to a wide variety of diagnostic testing applications to increase throughput for routine laboratory testing as well as a means to increase testing capacity to combat future pandemics. 
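The study above uses quantitative compressed sensing to recover individual viral loads from pooled measurements. The sketch below shows only the simpler binary decoding idea underlying such pooled designs, the COMP (combinatorial orthogonal matching pursuit) rule, not the authors' quantitative method; the pooling matrix and function name are illustrative.

```python
def comp_decode(pool_matrix, pool_results):
    """COMP decoding for binary group testing: any sample that appears
    in at least one negative pool is ruled out; all remaining samples
    are returned as candidate positives.

    pool_matrix[i][j] == 1 if sample j was placed in pool i;
    pool_results[i] is truthy if pool i tested positive.
    """
    n_samples = len(pool_matrix[0])
    ruled_out = set()
    for row, positive in zip(pool_matrix, pool_results):
        if not positive:
            ruled_out.update(j for j, in_pool in enumerate(row) if in_pool)
    return [j for j in range(n_samples) if j not in ruled_out]
```

For example, with four samples placed in four overlapping pools, a single infected sample is pinpointed from the pool-level results alone, without individual retests, which is the throughput advantage the compressed-sensing approach generalizes to quantitative viral loads.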
  3.
Background Conventional diagnosis of COVID-19 with reverse transcription polymerase chain reaction (RT-PCR) testing (hereafter, PCR) is associated with prolonged time to diagnosis and significant costs to run the test. The SARS-CoV-2 virus might lead to characteristic patterns in the results of widely available, routine blood tests that could be identified with machine learning methodologies. Machine learning modalities integrating findings from these common laboratory test results might accelerate ruling out COVID-19 in emergency department patients. Objective We sought to develop (ie, train and internally validate with cross-validation techniques) and externally validate a machine learning model to rule out COVID-19 using only routine blood tests among adults in emergency departments. Methods Using clinical data from emergency departments (EDs) from 66 US hospitals before the pandemic (before the end of December 2019) or during the pandemic (March-July 2020), we included patients aged ≥20 years in the study time frame. We excluded those with missing laboratory results. Model training used 2183 PCR-confirmed cases from 43 hospitals during the pandemic; negative controls were 10,000 prepandemic patients from the same hospitals. External validation used 23 hospitals with 1020 PCR-confirmed cases and 171,734 prepandemic negative controls. The main outcome was COVID-19 status predicted using same-day routine laboratory results. Model performance was assessed with area under the receiver operating characteristic (AUROC) curve as well as sensitivity, specificity, and negative predictive value (NPV). Results Of 192,779 patients included in the training, external validation, and sensitivity data sets (median age decile 50 [IQR 30-60] years, 40.5% male [78,249/192,779]), AUROC for training and external validation was 0.91 (95% CI 0.90-0.92).
Using a risk score cutoff of 1.0 (out of 100) in the external validation data set, the model achieved sensitivity of 95.9% and specificity of 41.7%; with a cutoff of 2.0, sensitivity was 92.6% and specificity was 59.9%. At the cutoff of 2.0, the NPVs at a prevalence of 1%, 10%, and 20% were 99.9%, 98.6%, and 97%, respectively. Conclusions A machine learning model developed with multicenter clinical data integrating commonly collected ED laboratory data demonstrated high rule-out accuracy for COVID-19 status, and might inform selective use of PCR-based testing. 
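The NPV figures quoted above follow directly from the reported sensitivity and specificity at the 2.0 cutoff via the standard Bayes calculation, which the short sketch below reproduces (the function name is my own):

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value: P(disease-free | negative test)."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Reported operating point at the 2.0 risk-score cutoff:
# sensitivity 92.6%, specificity 59.9%.
for p in (0.01, 0.10, 0.20):
    print(f"prevalence {p:.0%}: NPV = {npv(0.926, 0.599, p):.1%}")
```

Running this yields NPVs of 99.9%, 98.6%, and 97.0% at 1%, 10%, and 20% prevalence, matching the values reported in the abstract.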
  4. Population-scale and rapid testing for SARS-CoV-2 continues to be a priority for several parts of the world. We revisit the in vitro technology platforms for COVID-19 testing and diagnostics—molecular tests and rapid antigen tests, serology or antibody tests, and tests for the management of COVID-19 patients. Within each category of tests, we review the commercialized testing platforms, their analyzing systems, specimen collection protocols, testing methodologies, supply chain logistics, and related attributes. Our discussion is essentially focused on test products that have been granted emergency use authorization by the FDA to detect and diagnose COVID-19 infections. Different strategies for scaled-up and faster screening are covered here, such as pooled testing, screening programs, and surveillance testing. The near-term challenges lie in detecting subtle infectivity profiles, mapping the transmission dynamics of new variants, lowering the cost for testing, training a large healthcare workforce, and providing test kits for the masses. Through this review, we try to understand the feasibility of universal access to COVID-19 testing and diagnostics in the near future while being cognizant of the implicit tradeoffs during the development and distribution cycles of new testing platforms. 
  5. An accurate estimation of the residual risk of transfusion‐transmittable infections (TTIs), which includes the human immunodeficiency virus (HIV), hepatitis B and C viruses (HBV, HCV), among others, is essential, as it provides the basis for blood screening assay selection. While the highly sensitive nucleic acid testing (NAT) technology has recently become available, it is highly costly. As a result, in most countries, including the United States, the current practice for human immunodeficiency virus, hepatitis B virus, hepatitis C virus screening in donated blood is to use pooled NAT. Pooling substantially reduces the number of tests required, especially for TTIs with low prevalence rates. However, pooling also reduces the test's sensitivity, because the viral load of an infected sample might be diluted by the other samples in the pool to the point that it is not detectable by NAT, leading to potential TTIs. Infection‐free blood may also be falsely discarded, resulting in wasted blood. We derive expressions for the residual risk, expected number of tests, and expected amount of blood wasted for various two‐stage pooled testing schemes, including Dorfman‐type and array‐based testing, considering infection progression, infectivity of the blood unit, and imperfect tests under the dilution effect and measurement errors. We then calibrate our model using published data and perform a case study. Our study offers key insights on how pooled NAT, used within different testing schemes, contributes to the safety and cost of blood. Copyright © 2016 John Wiley & Sons, Ltd.
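The array-based scheme mentioned above can be illustrated with a small Monte Carlo sketch: specimens are arranged in an n-by-n grid, each row pool and column pool is tested, and specimens at the intersection of a positive row and positive column are retested individually. This toy version assumes a perfect test, unlike the paper's model, which accounts for dilution, measurement error, and infection progression; the function name is illustrative.

```python
import random

def array_tests_per_specimen(n, p, trials=2000, seed=1):
    """Monte Carlo estimate of tests per specimen for n-by-n array
    pooling at prevalence p: 2n pool tests (rows and columns), plus an
    individual retest for every specimen at a positive-row/positive-
    column intersection.  Assumes a perfect test (no dilution)."""
    rng = random.Random(seed)
    total_tests = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        row_pos = [any(row) for row in grid]
        col_pos = [any(grid[i][j] for i in range(n)) for j in range(n)]
        retests = sum(1 for i in range(n) for j in range(n)
                      if row_pos[i] and col_pos[j])
        total_tests += 2 * n + retests
    return total_tests / (trials * n * n)
```

At a low prevalence such as 1% with 10-by-10 arrays, this gives roughly 0.22 tests per specimen, illustrating why pooled schemes dominate individual testing for rare infections, though at realistic NAT prevalences the dilution effect the paper models becomes the binding concern.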

     