Title: Optimization framework for patient‐specific modeling under uncertainty
Abstract: Estimating a patient‐specific computational model's parameters relies on data that is often unreliable and ill‐suited for a deterministic approach. We develop an optimization‐based uncertainty quantification framework for probabilistic model tuning that discovers the model input distributions that generate target output distributions. Probabilistic sampling is performed using a surrogate model for computational efficiency, and a general distribution parameterization is used to describe each input. The approach is tested on seven patient‐specific modeling examples using CircAdapt, a cardiovascular circulatory model. Six examples are synthetic, aiming to match output distributions generated from known reference input distributions, while the seventh example uses real‐world patient data for the output distributions. Our results demonstrate accurate reproduction of the target output distributions, with correct recovery of the reference inputs in the six synthetic examples. The proposed approach is suitable for determining the parameter distributions of patient‐specific models with uncertain data and can be used to gain insight into the sensitivity of the model parameters to the measured data.
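As a rough illustration of the distribution-matching idea summarized in the abstract, the sketch below tunes the parameters of Gaussian input distributions so that a surrogate's output distributions match target output distributions. The Gaussian parameterization, the toy two-output surrogate, and the Wasserstein objective are illustrative assumptions standing in for the general distribution parameterization, the CircAdapt surrogate, and the optimization setup of the paper.

```python
# Minimal sketch of probabilistic model tuning: find input-distribution
# parameters whose surrogate-propagated outputs match target output
# distributions. Gaussian inputs, the toy surrogate, and the Wasserstein
# objective are assumptions for illustration only (not the CircAdapt setup).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def surrogate(x):
    """Toy stand-in for a cheap surrogate of the full model (two outputs)."""
    return np.column_stack([2.0 * x[:, 0] + x[:, 1] ** 2, x[:, 0] - x[:, 1]])

# Target output samples, generated from known reference input distributions
# (mimicking the synthetic examples described in the abstract).
x_ref = rng.normal([1.0, 0.5], [0.2, 0.1], size=(2000, 2))
y_target = surrogate(x_ref)

z = rng.standard_normal((2000, 2))  # fixed base samples (common random numbers)

def objective(theta):
    """theta = [mu1, log_sd1, mu2, log_sd2]; distance between output marginals."""
    mu, sd = theta[[0, 2]], np.exp(theta[[1, 3]])
    y = surrogate(mu + sd * z)
    return sum(wasserstein_distance(y[:, j], y_target[:, j])
               for j in range(y_target.shape[1]))

res = minimize(objective, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 5000})
print("recovered means:", res.x[[0, 2]])
print("recovered std devs:", np.exp(res.x[[1, 3]]))
```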
Award ID(s):
1750865
PAR ID:
10442587
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
International Journal for Numerical Methods in Biomedical Engineering
Volume:
39
Issue:
2
ISSN:
2040-7939
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Global sensitivity analysis aims to quantify and rank the relative contributions of all the uncertain inputs of a mathematical model to the uncertainty in the model's output, for any input-output mapping. Motivated by the limitations of the well-established, variance-based Sobol' indices, there has been interest in the development of non-moment-based global sensitivity metrics. This paper presents two complementary classes of metrics (one of which generalizes an existing metric in the literature) that are based on statistical distances between probability distributions rather than on statistical moments. To alleviate the large computational cost of Monte Carlo sampling of the input-output model to estimate probability distributions, polynomial chaos surrogate models are used. The surrogate models, in conjunction with sparse quadrature-based rules such as conjugate unscented transforms, permit efficient calculation of the proposed global sensitivity measures. Three benchmark sensitivity analysis examples are used to illustrate the proposed approach.
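A minimal sketch of one distribution-distance sensitivity index of the kind this abstract describes: for each input, the index is the average Wasserstein distance between the unconditional output distribution and the output distribution with that input held fixed. The cheap analytic function stands in for a polynomial chaos surrogate, and the specific index definition is an assumption for illustration, not the paper's metric definitions.

```python
# Distance-based global sensitivity sketch: average the Wasserstein distance
# between the unconditional output distribution and the output distribution
# conditioned on fixing one input at a time. The analytic function below is a
# stand-in for a polynomial chaos surrogate of an expensive model.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)

def cheap_model(x):                      # stand-in for a PCE surrogate
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n, d = 4000, 3
x = rng.uniform(-1.0, 1.0, size=(n, d))
y_all = cheap_model(x)                   # unconditional output samples

indices = []
for i in range(d):
    dists = []
    for xi in rng.uniform(-1.0, 1.0, size=32):   # outer loop over fixed values
        x_cond = x.copy()
        x_cond[:, i] = xi                        # condition on X_i = xi
        dists.append(wasserstein_distance(cheap_model(x_cond), y_all))
    indices.append(np.mean(dists))

print("distance-based sensitivity indices:", np.round(indices, 3))
```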
  2.
    With the increasing adoption of predictive models trained using machine learning across a wide range of high-stakes applications, e.g., health care, security, criminal justice, finance, and education, there is a growing need for effective techniques for explaining such models and their predictions. We aim to address this problem in settings where the predictive model is a black box; that is, we can only observe the response of the model to various inputs but have no knowledge of the internal structure of the predictive model, its parameters, the objective function, or the algorithm used to optimize the model. We reduce the problem of interpreting a black-box predictive model to that of estimating the causal effect of each of the model inputs on the model output, from observations of the model inputs and the corresponding outputs. We estimate the causal effects of model inputs on model output using variants of the Rubin-Neyman potential outcomes framework for estimating causal effects from observational data. We show how the resulting causal attribution of responsibility for the model output to the different model inputs can be used to interpret the predictive model and to explain its predictions. We present results of experiments that demonstrate the effectiveness of our approach to interpreting black-box predictive models via causal attribution, in the case of deep neural network models trained on one synthetic data set (where the input variables that impact the output variable are known by design) and two real-world data sets: handwritten digit classification and Parkinson's disease severity prediction. Because our approach does not require knowledge of the predictive model algorithm and is free of assumptions regarding the black-box predictive model except that its input-output responses be observable, it can be applied, in principle, to any black-box predictive model.
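The sketch below illustrates the interventional flavor of this approach: with only input-output access to a black-box predictor, the effect of each input is estimated by setting it to a low versus a high value while the remaining inputs keep their observed values. This do-style contrast is a simplified stand-in for the Rubin-Neyman estimation procedure used in the paper; the toy predictor and the quantile choices are assumptions.

```python
# Simplified interventional attribution for a black-box predictor: the effect
# of input i is the change in the average prediction when X_i is set to a high
# versus a low value while all other inputs keep their observed values.
import numpy as np

rng = np.random.default_rng(2)

def black_box(x):                 # only input-output access is assumed
    return np.tanh(1.5 * x[:, 0] - 0.5 * x[:, 1]) + 0.05 * x[:, 2]

x_obs = rng.normal(size=(5000, 3))          # observed model inputs

def causal_effect(i, lo_q=0.1, hi_q=0.9):
    lo, hi = np.quantile(x_obs[:, i], [lo_q, hi_q])
    x_lo, x_hi = x_obs.copy(), x_obs.copy()
    x_lo[:, i], x_hi[:, i] = lo, hi          # intervene on input i only
    return black_box(x_hi).mean() - black_box(x_lo).mean()

for i in range(x_obs.shape[1]):
    print(f"input {i}: average interventional effect = {causal_effect(i):+.3f}")
```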
  3. Abstract Objective. UNet-based deep-learning (DL) architectures are promising dose engines for traditional linear accelerator (Linac) models. Current UNet-based engines, however, were designed differently with various strategies, making it challenging to fairly compare the results from different studies. The objective of this study is to thoroughly evaluate the performance of UNet-based models on magnetic-resonance (MR)-Linac-based intensity-modulated radiation therapy (IMRT) dose calculations. Approach. The UNet-based models, including the standard-UNet, cascaded-UNet, dense-dilated-UNet, residual-UNet, HD-UNet, and attention-aware-UNet, were implemented. The model input is the patient CT and the IMRT field dose in water, and the output is the patient dose calculated by the DL model. The reference dose was calculated by the Monaco Monte Carlo module. Twenty training and ten test cases of prostate patients were included. The accuracy of the DL-calculated doses was measured using gamma analysis, and the calculation efficiency was evaluated by inference time. Results. All the studied models effectively corrected low-accuracy doses in water to high-accuracy patient doses in a magnetic field. The gamma passing rates between reference and DL-calculated doses were over 86% (1%/1 mm), 98% (2%/2 mm), and 99% (3%/3 mm) for all the models. The inference times ranged from 0.03 (graphics processing unit) to 7.5 (central processing unit) seconds. Each model demonstrated different strengths in calculation accuracy and efficiency: residual-UNet achieved the highest accuracy; HD-UNet offered high accuracy with the fewest parameters but the longest inference time; dense-dilated-UNet was consistently accurate regardless of model levels; standard-UNet had the shortest inference time but relatively lower accuracy; and the others showed average performance. Therefore, the best-performing model depends on the specific clinical needs and available computational resources. Significance. This study explored the feasibility of using common UNet-based models for MR-Linac-based dose calculations. By using the same model input type, patient training data, and computing environment, a fair assessment of the models' performance was presented.
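Because the abstract quotes gamma passing rates (e.g. 3%/3 mm), a rough sketch of a global 3D gamma evaluation is shown below. The brute-force neighborhood search, the wrap-around boundary handling via np.roll, and the toy dose grids are simplifications and assumptions; clinical gamma implementations interpolate between voxels and apply dose thresholds more carefully.

```python
# Rough sketch of a global 3D gamma passing rate (brute-force neighborhood
# search). For each voxel, gamma^2 = min over nearby reference voxels of
# (spatial distance / DTA)^2 + (dose difference / dose tolerance)^2.
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dose_pct=3.0, dta_mm=3.0):
    dd_tol = dose_pct / 100.0 * dose_ref.max()       # global dose criterion
    search = [int(np.ceil(dta_mm / s)) for s in spacing_mm]
    gamma2 = np.full(dose_eval.shape, np.inf)
    for dz in range(-search[0], search[0] + 1):
        for dy in range(-search[1], search[1] + 1):
            for dx in range(-search[2], search[2] + 1):
                r2 = ((dz * spacing_mm[0]) ** 2 + (dy * spacing_mm[1]) ** 2
                      + (dx * spacing_mm[2]) ** 2) / dta_mm ** 2
                # np.roll wraps at the borders; acceptable for this sketch only.
                shifted = np.roll(dose_ref, (dz, dy, dx), axis=(0, 1, 2))
                d2 = (dose_eval - shifted) ** 2 / dd_tol ** 2
                gamma2 = np.minimum(gamma2, r2 + d2)
    return np.mean(gamma2 <= 1.0)                    # fraction with gamma <= 1

# Toy example: a slightly perturbed reference dose should pass at a high rate.
rng = np.random.default_rng(3)
ref = rng.random((20, 30, 30))
eval_dose = ref + 0.005 * rng.standard_normal(ref.shape)
print("3%/3 mm passing rate:", gamma_pass_rate(eval_dose, ref, (3.0, 2.0, 2.0)))
```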
  4. Abstract Nonlinear response history analysis (NLRHA) is generally considered a reliable and robust method for assessing the seismic performance of buildings under strong ground motions. While NLRHA is fairly straightforward to apply when evaluating individual structures for a select set of ground motions at a specific building site, it becomes less practical for performing large numbers of analyses to evaluate either (1) multiple models of alternative design realizations with a site‐specific set of ground motions, or (2) individual archetype building models at multiple sites with multiple sets of ground motions. In this regard, surrogate models offer an alternative to running repeated NLRHAs for variable design realizations or ground motions. In this paper, a recently developed surrogate modeling technique, called probabilistic learning on manifolds (PLoM), is presented to estimate structural seismic response. Essentially, the PLoM method provides an efficient stochastic model for developing mappings between random variables, which can then be used to efficiently estimate the structural responses of systems with variations in design/modeling parameters or ground motion characteristics. The PLoM algorithm is introduced and then used in two case studies of 12‐story buildings for estimating probability distributions of structural responses. The first example focuses on the mapping between variable design parameters of a multidegree‐of‐freedom analysis model and its peak story drift and acceleration responses. The second example applies the PLoM technique to estimate structural responses for variations in site‐specific ground motion characteristics. In both examples, training data sets are generated for orthogonal input parameter grids, and test data sets are developed for input parameters with prescribed statistical distributions. Validation studies are performed to examine the accuracy and efficiency of the PLoM models. Overall, both examples show good agreement between the PLoM model estimates and the verification data sets. Moreover, in contrast to other common surrogate modeling techniques, the PLoM model is able to preserve the correlation structure between peak responses. Parametric studies are conducted to understand the influence of different PLoM tuning parameters on its prediction accuracy.
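The snippet below illustrates only the last point, checking whether a surrogate preserves the correlation structure between peak responses, by comparing the correlation matrix of surrogate-generated samples against that of a verification (NLRHA) data set. It is not an implementation of PLoM; the placeholder Gaussian samples are assumptions for illustration.

```python
# Validation-style check: does a surrogate reproduce the correlation structure
# between peak responses? The arrays below are placeholders, not PLoM output.
import numpy as np

rng = np.random.default_rng(4)

# Rows are realizations, columns are peak response quantities (e.g. story
# drifts and floor accelerations); an assumed correlation structure is used.
cov = np.array([[1.0, 0.8, 0.3, 0.2],
                [0.8, 1.0, 0.4, 0.3],
                [0.3, 0.4, 1.0, 0.7],
                [0.2, 0.3, 0.7, 1.0]])
verification = rng.multivariate_normal(np.zeros(4), cov, size=2000)
surrogate_samples = rng.multivariate_normal(np.zeros(4), cov, size=2000)

corr_ver = np.corrcoef(verification, rowvar=False)
corr_sur = np.corrcoef(surrogate_samples, rowvar=False)
print("max abs. correlation error:", np.abs(corr_ver - corr_sur).max())
```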
  5.
    Bleeding frequency and severity within the clinical categories of hemophilia A are highly variable, and the origin of this variation is unknown. Solving this mystery in coagulation requires the generation and analysis of large data sets composed of experimental outputs or patient samples, both of which are subject to limited availability. In this review, we describe how a computationally driven approach bypasses such limitations by generating large synthetic patient data sets. These data sets were created with a mechanistic mathematical model by varying the model inputs (clotting factor and inhibitor concentrations) within normal physiological ranges. Specific mathematical metrics were chosen from the model output, used as surrogate measures for bleeding severity, and statistically analyzed for further exploration and hypothesis generation. We highlight results from our recent study that employed this computationally driven approach to identify FV (factor V) as a key modifier of thrombin generation in mild to moderate hemophilia A, which was confirmed with complementary experimental assays. The mathematical model was further used to propose a potential mechanism for these observations, whereby thrombin generation is rescued in FVIII-deficient plasma due to reduced substrate competition between FV and FVIII for FXa.
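A small sketch of the synthetic-cohort idea described above: sample clotting factor and inhibitor levels within assumed physiological ranges for many virtual patients and collect a thrombin-generation metric for each. The factor list, the ranges, and the thrombin_metric placeholder are illustrative assumptions, not the mechanistic coagulation model used in the study.

```python
# Sketch of generating a synthetic patient cohort: sample factor/inhibitor
# levels within assumed ranges and evaluate a placeholder thrombin metric.
import numpy as np

rng = np.random.default_rng(5)
factors = ["FII", "FV", "FVIII", "FIX", "FX", "TFPI", "AT"]  # illustrative subset

def thrombin_metric(levels):
    """Placeholder for e.g. peak thrombin; replace with the mechanistic model."""
    return levels["FVIII"] * levels["FX"] / (1.0 + levels["TFPI"] + levels["AT"])

n_patients = 10_000
# Levels expressed as fractions of mean plasma concentration; 0.5-1.5 is an
# assumed "normal physiological range" used here purely for illustration.
cohort = {f: rng.uniform(0.5, 1.5, size=n_patients) for f in factors}
# Mild-to-moderate hemophilia A corresponds to FVIII activity of roughly
# 1-40% of normal, so FVIII is sampled from that reduced range instead.
cohort["FVIII"] = rng.uniform(0.01, 0.40, size=n_patients)

metric = thrombin_metric(cohort)
print("thrombin metric across virtual patients: "
      f"min={metric.min():.3f}, median={np.median(metric):.3f}, max={metric.max():.3f}")
```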