Funding: This work was supported by the National Science Foundation [Grant 1761842].
Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.0296.
An accurate estimation of the residual risk of transfusion‐transmittable infections (TTIs), which include the human immunodeficiency virus (HIV) and the hepatitis B and C viruses (HBV, HCV), among others, is essential, as it provides the basis for blood screening assay selection. While the highly sensitive nucleic acid testing (NAT) technology has recently become available, it is costly. As a result, in most countries, including the United States, the current practice for HIV, HBV, and HCV screening in donated blood is to use pooled NAT. Pooling substantially reduces the number of tests required, especially for TTIs with low prevalence rates. However, pooling also reduces the test's sensitivity, because the viral load of an infected sample may be diluted by the other samples in the pool to the point that it is no longer detectable by NAT, leading to potential TTIs. Infection‐free blood may also be falsely discarded, resulting in wasted blood. We derive expressions for the residual risk, expected number of tests, and expected amount of blood wasted for various two‐stage pooled testing schemes, including Dorfman‐type and array‐based testing, considering infection progression, infectivity of the blood unit, and imperfect tests under the dilution effect and measurement errors. We then calibrate our model using published data and perform a case study. Our study offers key insights into how pooled NAT, used within different testing schemes, contributes to the safety and cost of blood. Copyright © 2016 John Wiley & Sons, Ltd.
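To illustrate why pooling reduces the number of tests at low prevalence, the sketch below computes the expected tests per donation under classical Dorfman two‐stage pooling with a perfect test. This is only the textbook baseline, not the paper's model: it ignores the dilution effect, measurement errors, and infection progression that the paper incorporates, and the prevalence value is illustrative, not taken from the case study.

```python
def dorfman_tests_per_sample(p: float, n: int) -> float:
    """Expected tests per donation under Dorfman pooling with pool size n >= 2.

    Stage 1: one pooled test shared by n samples (cost 1/n per sample).
    Stage 2: if the pool is positive, which under a perfect test happens
    with probability 1 - (1 - p)^n, each member is retested individually
    (cost 1 per sample).
    """
    return 1.0 / n + (1.0 - (1.0 - p) ** n)


# Sweep pool sizes for a low-prevalence TTI (illustrative p, not from the paper).
p = 0.001
best_n = min(range(2, 101), key=lambda n: dorfman_tests_per_sample(p, n))
print(best_n, dorfman_tests_per_sample(p, best_n))
```

At this prevalence the optimal pool size is near 1/sqrt(p), and the expected cost falls well below one test per donation, which is the economy that motivates pooled NAT; the paper's contribution is quantifying how the accompanying dilution effect erodes sensitivity.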