Title: An Algorithm for Nonparametric Estimation of a Multivariate Mixing Distribution with Applications to Population Pharmacokinetics
Population pharmacokinetic (PK) modeling has become a cornerstone of drug development and optimal patient dosing. This approach offers great benefits for datasets with sparse sampling, such as in pediatric patients, and can describe between-patient variability. While most current algorithms assume normal or log-normal distributions for PK parameters, we present a mathematically consistent nonparametric maximum likelihood (NPML) method for estimating multivariate mixing distributions without any assumption about the shape of the distribution. This approach can handle distributions of any shape for all PK parameters. Convexity theory shows that the NPML estimator is discrete, meaning that it has a finite number of points with nonzero probability; in fact, there are at most N points, where N is the number of observed subjects. The original infinite-dimensional NPML problem then becomes the finite-dimensional problem of finding the locations and probabilities of the support points. In the simplest case, each point essentially represents the set of PK parameters for one patient. The probabilities of the support points are found by a primal-dual interior-point method; their locations are found by an adaptive grid method. Our method is able to handle high-dimensional and complex multivariate mixture models. An important application is discussed for the problem of population pharmacokinetics and a nontrivial example is treated. Our algorithm has been successfully applied in hundreds of published pharmacometric studies. In addition to population pharmacokinetics, this research also applies to empirical Bayes estimation and many other areas of applied mathematics. This approach thereby presents an important addition to the pharmacometric toolbox for drug development and optimal patient dosing.
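As a rough illustration of the finite-dimensional problem described above, the sketch below fixes a one-dimensional grid of candidate support points and estimates their mixing weights for toy data. The published algorithm finds the weights with a primal-dual interior-point method and moves the support points with an adaptive grid; here both are replaced by simple stand-ins (a fixed grid and EM-style multiplicative updates), and the Gaussian observation model and all numerical values are assumptions made for illustration only.

```python
import numpy as np

# Toy NPML sketch: with candidate support points fixed, the mixing weights solve a
# convex problem on the probability simplex.  The grid, data, and Gaussian likelihood
# below are illustrative stand-ins, not a real PK model.
rng = np.random.default_rng(0)

# toy "observed subjects": one noisy measurement each, driven by a latent parameter
N = 40
true_theta = rng.choice([1.0, 3.0], size=N, p=[0.3, 0.7])   # hidden two-point mixture
y = true_theta + 0.3 * rng.standard_normal(N)

# candidate support points (the NPML estimate needs at most N of them)
grid = np.linspace(0.0, 4.0, 81)

def lik(y_i, theta):
    """Likelihood p(y_i | theta): here a simple Gaussian observation model."""
    return np.exp(-0.5 * ((y_i - theta) / 0.3) ** 2)

L = lik(y[:, None], grid[None, :])          # N x K matrix of likelihoods
w = np.full(grid.size, 1.0 / grid.size)     # start from uniform weights on the grid

for _ in range(500):                        # multiplicative (EM-style) weight update
    mix = L @ w                             # mixture likelihood per subject
    w *= (L / mix[:, None]).mean(axis=0)    # keeps w nonnegative and summing to one

support = w > 1e-4                          # prune: the NPML solution is discrete
print("support points :", np.round(grid[support], 2))
print("probabilities  :", np.round(w[support], 3))
print(f"log-likelihood : {np.log(L @ w).sum():.2f}")
```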
Award ID(s):
1908890
NSF-PAR ID:
10356744
Author(s) / Creator(s):
Date Published:
Journal Name:
Pharmaceutics
Volume:
13
Issue:
1
ISSN:
1999-4923
Page Range / eLocation ID:
42
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    The objective of this study was to describe the pharmacokinetics (PK) of intravenous phenobarbital in neonates and infants on extracorporeal membrane oxygenation (ECMO) and to provide dosing recommendations in this population. We performed a retrospective single-center PK study of phenobarbital in neonates and infants on ECMO between January 1, 2014, and December 31, 2018. We developed a population PK model using nonlinear mixed-effects modeling, performed simulations using the final PK parameters, and determined optimal dosing based on attainment of peak and trough concentrations between 20 and 40 mg/L. We included 35 subjects with a median (range) age and weight of 14 days (1–154 days) and 3.4 kg (1.6–8.1 kg), respectively. A total of 194 samples were included in the analysis. Five children (14%) contributing 30 samples (16%) were supported by continuous venovenous hemodiafiltration (CVVHDF). A 1-compartment model best described the data. Typical clearance and volume of distribution for a 3.4-kg infant were 0.038 L/h and 3.83 L, respectively. Clearance increased with age and CVVHDF. While on ECMO, phenobarbital clearance in children on CVVHDF was 6-fold higher than clearance in children without CVVHDF. In typical subjects, a loading dose of 30 mg/kg followed by maintenance doses of 6–7 mg/kg/day administered as divided doses every 12 hours reached goal concentrations. Age did not impact dosing recommendations. However, higher doses were needed in children on CVVHDF. We strongly recommend therapeutic drug monitoring in children on renal replacement therapy (excluding slow continuous ultrafiltration) while on ECMO.
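    As a rough consistency check of the dosing recommendation, the sketch below simulates a one-compartment intravenous model using the typical parameter values reported above (CL = 0.038 L/h, V = 3.83 L for a 3.4-kg infant). The bolus-input assumption, dosing schedule details, and simulation grid are illustrative choices, not the study's simulation methodology.

```python
import numpy as np

# One-compartment IV bolus simulation with the typical values from the abstract.
# Instantaneous input, a 7-day horizon, and a 0.5-h grid are assumptions for illustration.
CL, V = 0.038, 3.83             # L/h and L, typical values for a 3.4-kg infant
weight = 3.4                    # kg
k_el = CL / V                   # first-order elimination rate constant (1/h)

load = 30 * weight              # mg, loading dose of 30 mg/kg
maint = 6.5 * weight / 2        # mg per dose, ~6-7 mg/kg/day divided every 12 h

t_step, horizon = 0.5, 24 * 7   # hours
times = np.arange(0.0, horizon + t_step, t_step)
conc = np.zeros_like(times)

amount = 0.0
for i, t in enumerate(times):
    if i > 0:
        amount *= np.exp(-k_el * t_step)     # first-order decay between grid points
    if np.isclose(t % 12, 0.0):              # dose every 12 h (loading dose at t = 0)
        amount += load if t == 0 else maint
    conc[i] = amount / V                     # mg/L

print(f"peak near end of week  : {conc[times >= 24*5].max():.1f} mg/L")
print(f"trough near end of week: {conc[times >= 24*5].min():.1f} mg/L")
print("target window: 20-40 mg/L")
```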

     
  2.
    Linear mixed-effects models play a fundamental role in statistical methodology. A variety of Markov chain Monte Carlo (MCMC) algorithms exist for fitting these models, but they are inefficient in massive data settings because every iteration of any such MCMC algorithm passes through the full data. Many divide-and-conquer methods have been proposed to solve this problem, but they lack theoretical guarantees, impose restrictive assumptions, or have complex computational algorithms. Our focus is one such method called the Wasserstein Posterior (WASP), which has become popular due to its optimal theoretical properties under general assumptions. Unfortunately, practical implementation of the WASP either requires solving a complex linear program or is limited to one-dimensional parameters. The former method is inefficient and the latter method fails to capture the joint posterior dependence structure of multivariate parameters. We develop a new algorithm for computing the WASP of multivariate parameters that is easy to implement and is useful for computing the WASP in any model where the posterior distribution of the parameters belongs to a location-scatter family of probability measures. The algorithm is introduced for linear mixed-effects models with both implementation details and theoretical properties. Our algorithm outperforms the current state-of-the-art method in inference on functions of the covariance matrix of the random effects across diverse numerical comparisons.
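    To make the location-scatter idea concrete, the sketch below computes a 2-Wasserstein barycenter for the Gaussian special case, where the barycenter mean is the weighted average of the subset-posterior means and the covariance solves a standard fixed-point equation. This is a generic textbook construction applied to toy posterior summaries, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def barycenter_cov(covs, weights, iters=100):
    """Fixed-point iteration for the 2-Wasserstein barycenter covariance of Gaussians."""
    S = np.mean(covs, axis=0)                 # any SPD starting point works
    for _ in range(iters):
        S_half = np.real(sqrtm(S))
        inner = sum(w * np.real(sqrtm(S_half @ C @ S_half))
                    for w, C in zip(weights, covs))
        S_half_inv = np.linalg.inv(S_half)
        S = S_half_inv @ inner @ inner @ S_half_inv
    return S

# three toy 2-d "subset posteriors", each summarized by a mean and covariance
means = [rng.normal(size=2) for _ in range(3)]
covs = []
for _ in range(3):
    A = rng.normal(size=(2, 2))
    covs.append(A @ A.T + 0.5 * np.eye(2))    # make each covariance SPD
weights = np.full(3, 1 / 3)

bary_mean = np.average(means, axis=0, weights=weights)
bary_cov = barycenter_cov(np.array(covs), weights)
print("barycenter mean:\n", np.round(bary_mean, 3))
print("barycenter covariance:\n", np.round(bary_cov, 3))
```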
  3. Abstract Purpose

    We consider the following scenario: A radiotherapy clinic has a limited number of proton therapy slots available each day to treat cancer patients of a given tumor site. The clinic's goal is to minimize the expected number of complications in the cohort of all patients of that tumor site treated at the clinic, and thereby maximize the benefit of its limited proton resources.

    Methods

    To address this problem, we extend the normal tissue complication probability (NTCP) model–based approach to proton therapy patient selection to the situation of limited resources at a given institution. We assume that, on each day, a newly diagnosed patient is scheduled for treatment at the clinic with some probability and with some benefit from protons over photons, which is drawn from a probability distribution. When a new patient is scheduled for treatment, a decision for protons or photons must be made, and a patient may wait only a limited amount of time for a proton slot to become available. The goal is to determine the thresholds for selecting a patient for proton therapy that optimally balance the competing goals of using all available slots while not blocking slots with low-benefit patients. This problem can be formulated as a Markov decision process (MDP), and the optimal thresholds can be determined via a value‐policy iteration method.

    Results

    The optimal thresholds depend on the number of available proton slots, the average number of patients under treatment, and the distribution of values. In addition, the optimal thresholds depend on the current utilization of the facility. For example, if one proton slot is available and a second frees up shortly, the optimal threshold is lower compared to a situation where all but one slot remain blocked for longer.

    Conclusions

    MDP methodology can be used to augment current NTCP model–based patient selection methods to account for the situation in which, on any given day, the number of proton slots is limited. The optimal threshold then depends on the current utilization of the proton facility. Although the optimal policy yields only a small nominal benefit over a constant threshold, it is more robust against variations in patient load.
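    The toy value-iteration sketch below illustrates the kind of utilization-dependent acceptance thresholds described above, using a deliberately simplified slot model (occupied slots free up independently each day, at most one arrival per day, a uniform benefit distribution). All states, probabilities, and values are assumptions made for illustration and do not reflect the paper's MDP.

```python
import numpy as np
from scipy.stats import binom

S = 3            # number of proton slots (toy value)
p_arrive = 0.7   # daily probability that a new patient is scheduled (assumed)
p_free = 0.2     # daily probability that an occupied slot frees up (assumed)
gamma = 0.99     # discount factor
benefits = np.linspace(0.0, 1.0, 21)               # discretized proton-over-photon benefit
benefit_probs = np.full(len(benefits), 1 / len(benefits))

V = np.zeros(S + 1)                                 # value indexed by occupied slots
for _ in range(2000):                               # value iteration
    V_new = np.zeros_like(V)
    for occ in range(S + 1):
        freed = np.arange(occ + 1)
        p_freed = binom.pmf(freed, occ, p_free)     # prob that `freed` slots free up today
        EV_decline = p_freed @ V[occ - freed]       # continue without admitting a patient
        if occ < S:
            EV_accept = p_freed @ V[occ - freed + 1]
            # accept iff immediate benefit beats the option value of keeping the slot free
            best = np.maximum(benefits + gamma * EV_accept, gamma * EV_decline)
            v_arrival = benefit_probs @ best
        else:
            v_arrival = gamma * EV_decline          # all slots blocked: photons only
        V_new[occ] = p_arrive * v_arrival + (1 - p_arrive) * gamma * EV_decline
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# report the resulting acceptance threshold for each utilization level
for occ in range(S):
    freed = np.arange(occ + 1)
    p_freed = binom.pmf(freed, occ, p_free)
    EV_accept = p_freed @ V[occ - freed + 1]
    EV_decline = p_freed @ V[occ - freed]
    threshold = gamma * (EV_decline - EV_accept)
    print(f"occupied slots = {occ}: treat with protons if benefit > {threshold:.3f}")
```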

     
  4. Initializing simulations of deformable objects involves setting the rest state of all internal forces at the rest shape of the object. However, the rest shape is often not explicitly provided. In its absence, it is common to initialize by treating the given initial shape as the rest shape. This leads to sagging, the undesirable deformation under gravity as soon as the simulation begins. Prior solutions to sagging are limited to specific simulation systems and material models, most of them cannot handle frictional contact, and they require solving expensive global nonlinear optimization problems. We introduce a novel solution to the sagging problem that can be applied to a variety of simulation systems and materials. The key feature of our approach is that we avoid solving a global nonlinear optimization problem by performing the initialization in two stages. First, we use a global linear optimization for static equilibrium. Any nonlinearity of the material definition is handled in the local stage, which solves many small local problems efficiently and in parallel. Notably, our method can properly handle frictional contact orders of magnitude faster than prior work. We show that our approach can be applied to various simulation systems by presenting examples with mass-spring systems, cloth simulations, the finite element method, the material point method, and position-based dynamics.
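    The sketch below illustrates the two-stage idea on the simplest possible example, a hanging mass-spring chain: a global linear solve for the spring tensions that hold the given shape in static equilibrium, followed by a local per-spring rest-length adjustment. The chain setup, constants, and Hookean material are assumptions for illustration; the paper's local stage handles general nonlinear materials and contact.

```python
import numpy as np

# Toy sag-free initialization for a hanging mass-spring chain.
# All constants below are illustrative assumptions, not values from the paper.
n = 5                    # number of springs in the chain
m, g = 0.1, 9.81         # node mass (kg), gravity (m/s^2)
k = 50.0                 # spring stiffness (N/m)
positions = np.linspace(0.0, -1.0, n + 1)   # desired node heights; the top node is fixed

# Stage 1 (global, linear): solve for spring tensions t that hold every free node in
# static equilibrium: t[i] - t[i+1] = m*g for interior nodes, t[n-1] = m*g at the tip.
A = np.zeros((n, n))
b = np.full(n, m * g)
for i in range(n):
    A[i, i] = 1.0
    if i + 1 < n:
        A[i, i + 1] = -1.0
tensions = np.linalg.solve(A, b)

# Stage 2 (local, per element): pick each spring's rest length so that, at the given
# shape, it produces exactly the tension from stage 1.  For a Hookean spring this is a
# closed-form solve; for a nonlinear material one would invert its force law per element.
current_lengths = np.abs(np.diff(positions))
rest_lengths = current_lengths - tensions / k

print("tensions (N):    ", np.round(tensions, 3))
print("rest lengths (m):", np.round(rest_lengths, 4))
```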
  5. SUMMARY

    We aim to simultaneously infer the shape of subsurface structures and material properties such as density or viscosity from surface observations. Modelling mantle flow using incompressible instantaneous Stokes equations, the problem is formulated as an infinite-dimensional Bayesian inverse problem. Subsurface structures are described as level sets of a smooth auxiliary function, allowing for geometric flexibility. As inverting for subsurface structures from surface observations is inherently challenging, knowledge of plate geometries from seismic images is incorporated into the prior probability distributions. The posterior distribution is approximated using a dimension-robust Markov-chain Monte Carlo sampling method, allowing quantification of uncertainties in inferred parameters and shapes. The effectiveness of the method is demonstrated in two numerical examples with synthetic data. In a model with two higher-density sinkers, their shape and location are inferred with moderate uncertainty, but a trade-off between sinker size and density is found. This uncertainty is significantly reduced by combining horizontal surface velocities and normal traction data. For a more realistic subduction problem, we construct tailored level-set priors representing “seismic” knowledge and infer the subducting plate geometry together with its uncertainty. A trade-off between the thickness and viscosity of the plate in the hinge zone is found, consistent with earlier work.
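    One standard dimension-robust sampler for function-space Bayesian inverse problems of this kind is the preconditioned Crank-Nicolson (pCN) method, sketched below on a toy one-dimensional problem. The prior covariance, forward map, and data are illustrative stand-ins; the sampler actually used in the paper, and the mantle-flow forward model, may differ in detail.

```python
import numpy as np

# Minimal pCN sketch: the proposal preserves the Gaussian prior, so the acceptance
# probability depends only on the likelihood and the method is robust to refinement
# of the discretization.  Everything below is a toy stand-in.
rng = np.random.default_rng(2)

n = 200                                    # discretization of the latent function
x = np.linspace(0, 1, n)
prior_cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # assumed exponential kernel
L_chol = np.linalg.cholesky(prior_cov + 1e-8 * np.eye(n))

def sample_prior():
    return L_chol @ rng.standard_normal(n)

def neg_log_likelihood(u, data, noise=0.1):
    pred = u[::40]                          # toy forward map: observe a few "surface" points
    return 0.5 * np.sum((pred - data) ** 2) / noise ** 2

truth = sample_prior()
data = truth[::40] + 0.1 * rng.standard_normal(truth[::40].size)

beta = 0.2                                  # pCN step size
u = sample_prior()
phi = neg_log_likelihood(u, data)
accepted = 0
for it in range(5000):
    u_prop = np.sqrt(1 - beta**2) * u + beta * sample_prior()   # prior-preserving proposal
    phi_prop = neg_log_likelihood(u_prop, data)
    if np.log(rng.uniform()) < phi - phi_prop:                  # accept w.p. min(1, e^{phi-phi'})
        u, phi = u_prop, phi_prop
        accepted += 1
print(f"acceptance rate: {accepted / 5000:.2f}")
```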

     