

Title: Advantages of Monte Carlo Confidence Intervals for Incremental Cost-Effectiveness Ratios: A Comparison of Five Methods
Cost-effectiveness analysis studies in education often prioritize descriptive statistics of cost-effectiveness measures, such as the point estimate of the incremental cost-effectiveness ratio (ICER), while neglecting inferential statistics like confidence intervals (CIs). Without CIs, it becomes impossible to make meaningful comparisons of alternative educational strategies, as there is no basis for assessing the uncertainty of point estimates or the plausible range of ICERs. This study evaluated the relative performance of five methods for constructing CIs for ICERs in randomized controlled trials with cost-effectiveness analyses. We found that the Monte Carlo interval method based on summary statistics consistently performed well regarding coverage, width, and symmetry. It yielded estimates comparable to the percentile bootstrap method across multiple scenarios. In contrast, Fieller’s method did not work well with small sample sizes and small treatment effects. Further, Taylor’s method and the Box method performed least well. We discussed two-sided and one-sided hypothesis testing based on ICER CIs, developed tools for calculating these ICER CIs, and demonstrated the calculation using an empirical example. We concluded with suggestions for applications and extensions of this work.
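The two best-performing approaches named in the abstract, the percentile bootstrap and the Monte Carlo interval from summary statistics, can be sketched in a few lines. Below is a minimal illustration on synthetic per-group cost and effect data; every distribution and constant is invented for illustration and does not come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-student costs and effect sizes for two trial arms
# (all numbers are invented for illustration)
n = 200
cost_t, cost_c = rng.normal(500, 80, n), rng.normal(350, 70, n)
eff_t, eff_c = rng.normal(0.40, 0.15, n), rng.normal(0.20, 0.15, n)

def icer(ct, cc, et, ec):
    # incremental cost divided by incremental effect
    return (ct.mean() - cc.mean()) / (et.mean() - ec.mean())

point = icer(cost_t, cost_c, eff_t, eff_c)

# 1) Percentile bootstrap: resample each arm, recompute the ICER,
#    take empirical percentiles of the bootstrap replicates
B = 2000
boot = np.empty(B)
for b in range(B):
    i, j = rng.integers(0, n, n), rng.integers(0, n, n)
    boot[b] = icer(cost_t[i], cost_c[j], eff_t[i], eff_c[j])
bs_lo, bs_hi = np.percentile(boot, [2.5, 97.5])

# 2) Monte Carlo interval from summary statistics: draw the incremental
#    cost and effect from normal approximations, take ratio percentiles
dC = cost_t.mean() - cost_c.mean()
dE = eff_t.mean() - eff_c.mean()
se_dC = np.sqrt(cost_t.var(ddof=1) / n + cost_c.var(ddof=1) / n)
se_dE = np.sqrt(eff_t.var(ddof=1) / n + eff_c.var(ddof=1) / n)
ratios = rng.normal(dC, se_dC, B) / rng.normal(dE, se_dE, B)
mc_lo, mc_hi = np.percentile(ratios, [2.5, 97.5])

print(f"ICER = {point:.0f}; bootstrap CI ({bs_lo:.0f}, {bs_hi:.0f}); "
      f"Monte Carlo CI ({mc_lo:.0f}, {mc_hi:.0f})")
```

Note that when the incremental effect is close to zero the denominator draws can change sign and the ratio distribution becomes unstable, which is consistent with the small-effect caveats in the abstract.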
Award ID(s):
2000705
PAR ID:
10629093
Author(s) / Creator(s):
; ; ; ; ; ;
Publisher / Repository:
Taylor & Francis
Date Published:
Journal Name:
Journal of Research on Educational Effectiveness
ISSN:
1934-5747
Page Range / eLocation ID:
1 to 29
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background: Metamodels can address some of the limitations of complex simulation models by formulating a mathematical relationship between input parameters and simulation model outcomes. Our objective was to develop and compare the performance of machine learning (ML)–based metamodels against a conventional metamodeling approach in replicating the findings of a complex simulation model. Methods: We constructed 3 ML-based metamodels using random forest, support vector regression, and artificial neural networks, and a linear regression–based metamodel, from a previously validated microsimulation model of the natural history of hepatitis C virus (HCV) with 40 input parameters. Outcomes of interest included societal costs and quality-adjusted life-years (QALYs), the incremental cost-effectiveness ratio (ICER) of HCV treatment versus no treatment, the cost-effectiveness acceptability curve (CEAC), and the expected value of perfect information (EVPI). We evaluated metamodel performance using root mean squared error (RMSE) and Pearson's R² on the normalized data. Results: The R² values for the linear regression metamodel for QALYs without treatment, QALYs with treatment, societal cost without treatment, societal cost with treatment, and ICER were 0.92, 0.98, 0.85, 0.92, and 0.60, respectively. The corresponding R² values for our ML-based metamodels were 0.96, 0.97, 0.90, 0.95, and 0.49 for support vector regression; 0.99, 0.83, 0.99, 0.99, and 0.82 for artificial neural network; and 0.99, 0.99, 0.99, 0.99, and 0.98 for random forest. Similar trends were observed for RMSE. The CEAC and EVPI curves produced by the random forest metamodel matched the results of the simulation output more closely than those of the linear regression metamodel. Conclusions: ML-based metamodels generally outperformed traditional linear regression metamodels at replicating results from complex simulation models, with random forest metamodels performing best.
Highlights: Decision-analytic models are frequently used by policy makers and other stakeholders to assess the impact of new medical technologies and interventions. However, complex models can impose limitations on conducting probabilistic sensitivity analysis and value-of-information analysis, and may not be suitable for developing online decision-support tools. Metamodels, which accurately formulate a mathematical relationship between input parameters and model outcomes, can replicate complex simulation models and address these limitations. A machine learning–based random forest metamodel can outperform linear regression in replicating the findings of a complex simulation model. Such a metamodel can be used for conducting cost-effectiveness and value-of-information analyses or for developing online decision-support tools.
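The metamodeling workflow above can be illustrated with a minimal numpy-only sketch: a cheap stand-in "simulator" plays the role of the expensive microsimulation model, and a linear-regression metamodel is fit to its outputs and scored with RMSE and R² on held-out parameter sets. The ML-based metamodels in the study (random forest, support vector regression, neural networks) would replace the least-squares fit; the toy function and all constants here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an expensive simulation model: a nonlinear function
# of 5 input parameters (the real microsimulation model has 40)
def simulator(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2 + X[:, 2] * X[:, 3] + 0.5 * X[:, 4]

# Design step: sample parameter sets, run the "simulator" once per set
X_train = rng.uniform(-1, 1, (500, 5))
y_train = simulator(X_train)
X_test = rng.uniform(-1, 1, (200, 5))
y_test = simulator(X_test)

# Linear-regression metamodel fit by least squares (with intercept)
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
y_hat = np.column_stack([np.ones(len(X_test)), X_test]) @ coef

# Metamodel performance on the held-out set: RMSE and R^2
rmse = np.sqrt(np.mean((y_test - y_hat) ** 2))
r2 = 1 - np.sum((y_test - y_hat) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(f"RMSE = {rmse:.3f}, R^2 = {r2:.3f}")
```

The quadratic and interaction terms in the toy simulator are exactly the kind of structure a linear metamodel misses and a random forest can capture, which mirrors the study's finding.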
  2. Paper-based analytical devices (PADs) offer a low-cost, user-friendly platform for rapid point-of-use testing. Without scalable fabrication methods, however, few PADs make it out of the academic laboratory and into the hands of end users. Previously, wax printing was considered an ideal PAD fabrication method, but given that wax printers are no longer commercially available, alternatives are needed. Here, we present one such alternative: the air-gap PAD. Air-gap PADs consist of hydrophilic paper test zones, separated by “air gaps” and affixed to a hydrophobic backing with double-sided adhesive. The primary appeal of this design is its compatibility with roll-to-roll equipment for large-scale manufacturing. In this study, we examine design considerations for air-gap PADs, compare the performance of wax-printed and air-gap PADs, and report on a pilot-scale roll-to-roll production run of air-gap PADs in partnership with a commercial test-strip manufacturer. Air-gap devices performed comparably to their wax-printed counterparts in Washburn flow experiments, a paper-based titration, and a 12-lane pharmaceutical screening device. Using roll-to-roll manufacturing, we produced 2700 feet of air-gap PADs for as little as $0.03 per PAD. 
  3. The number of non-negative integer matrices with given row and column sums features in a variety of problems in mathematics and statistics but no closed-form expression for it is known, so we rely on approximations. In this paper, we describe a new such approximation, motivated by consideration of the statistics of matrices with non-integer numbers of columns. This estimate can be evaluated in time linear in the size of the matrix and returns results of accuracy as good as or better than existing linear-time approximations across a wide range of settings. We show that the estimate is asymptotically exact in the regime of sparse tables, while empirically performing at least as well as other linear-time estimates in the regime of dense tables. We also use the new estimate as the starting point for an improved numerical method for either counting or sampling matrices with given margins using sequential importance sampling. Code implementing our methods is available. 
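For very small tables, the quantity being approximated can be computed exactly by dynamic programming over columns, which is useful for sanity-checking any estimate. A brute-force sketch (not the paper's estimator, and feasible only for small margins):

```python
from functools import lru_cache

def count_matrices(row_sums, col_sums):
    """Exact count of non-negative integer matrices with the given row
    and column sums, via dynamic programming over columns."""
    if sum(row_sums) != sum(col_sums):
        return 0  # margins must balance

    def fills(remaining, s):
        # all ordered ways to write s as a sum bounded entrywise by `remaining`
        if len(remaining) == 1:
            if remaining[0] >= s:
                yield (s,)
            return
        for k in range(min(remaining[0], s) + 1):
            for rest in fills(remaining[1:], s - k):
                yield (k,) + rest

    @lru_cache(maxsize=None)
    def go(remaining, j):
        # remaining: row-sum budget left after filling columns 0..j-1
        if j == len(col_sums):
            return 1
        total = 0
        for col in fills(remaining, col_sums[j]):
            total += go(tuple(r - k for r, k in zip(remaining, col)), j + 1)
        return total

    return go(tuple(row_sums), 0)

# 2x2 margins (1,1)/(1,1): the identity and anti-identity matrices -> 2
print(count_matrices((1, 1), (1, 1)))  # -> 2
# 3x3 margins of all ones: the six permutation matrices -> 6
print(count_matrices((1, 1, 1), (1, 1, 1)))  # -> 6
```

The state space grows rapidly with the margins, which is exactly why linear-time approximations like the one described above are needed for realistic tables.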
  4. In this paper we demonstrate that it is possible to discriminate between high-level motion types such as walking, jogging, or running based on just the change in the relational statistics among the detected image features, without the need for object models, perfect segmentation, or tracking. Instead of the statistics of the feature attributes themselves, we consider the distribution of the statistics of the relations among the features. We represent the observed distribution of feature relations in an image as a point in a space where the Euclidean distance is related to the Bhattacharyya distance between probability functions. Different motion types sweep out different traces in this Space of Probability Functions (SoPF). We demonstrate the effectiveness of this representation on image sequences of humans in motion, gathered using a digital video camera. We show that it is possible not only to distinguish between motion types but also to discriminate between persons based on the SoPF traces.
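The distance underlying the SoPF construction is the Bhattacharyya distance between distributions of feature relations. A minimal sketch on two hypothetical relation histograms (the bin values are invented; real inputs would be histograms of pairwise feature relations per frame):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()       # normalize to probabilities
    bc = np.sum(np.sqrt(p * q))           # Bhattacharyya coefficient
    return -np.log(bc)

# Hypothetical relation histograms for two motion types
walk = [0.1, 0.4, 0.3, 0.2]
run = [0.3, 0.2, 0.2, 0.3]
print(bhattacharyya_distance(walk, run))   # positive: distributions differ
print(bhattacharyya_distance(walk, walk))  # identical distributions -> 0
```

The embedding trick the abstract alludes to is that mapping a histogram p to the vector of square roots of its bins makes Euclidean distance a monotone function of the Bhattacharyya coefficient, so traces can be compared with ordinary geometry.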
  5. Given a random sample of size n from a p-dimensional random vector, we are interested in testing whether the p components of the random vector are mutually independent. This is the so-called complete independence test. In the multivariate normal case, it is equivalent to testing whether the correlation matrix is an identity matrix. In this paper, we propose a one-sided empirical likelihood method for the complete independence test based on squared sample correlation coefficients. The limiting distribution of our one-sided empirical likelihood test statistic is proved to be Z^2 I(Z > 0) when both n and p tend to infinity, where Z is a standard normal random variable. In order to improve the power of the empirical likelihood test statistic, we also introduce a rescaled empirical likelihood test statistic. We carry out an extensive simulation study to compare the performance of the rescaled empirical likelihood method with two other statistics.
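The building blocks of the test are the squared pairwise sample correlation coefficients. A short sketch that computes them for data simulated under the null of complete independence; the empirical likelihood statistic itself is not reproduced here, and the sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate under the null: n observations of p mutually independent components
n, p = 100, 6
X = rng.standard_normal((n, p))

# Squared sample correlation coefficients for all p*(p-1)/2 distinct pairs
R = np.corrcoef(X, rowvar=False)
iu = np.triu_indices(p, k=1)
r2 = R[iu] ** 2

# Under independence, each squared correlation has mean roughly 1/(n-1),
# so all pairwise values should be small
print(f"mean squared correlation: {r2.mean():.4f} "
      f"(approx {1 / (n - 1):.4f} under H0)")
```

Under an alternative with correlated components, these squared correlations inflate, which is what a one-sided test in the direction Z > 0 is designed to detect.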