-
Abstract
Purpose: As a challenging but important optimization problem, inverse planning for volumetric modulated arc therapy (VMAT) has attracted much research attention. The column generation (CG) method is so far one of the most effective solution schemes. However, it often relies on simplifications that leave significant gaps between its output and an actually deliverable plan. This paper presents a novel column generation (NCG) approach that pushes the planning results substantially closer to practice.
Methods: The proposed NCG algorithm is equipped with multiple new quality-enhancing and computation-facilitating modules: (1) flexible constraints are enabled on both dose rates and treatment time, to adapt to machine capabilities and the planner's preferences, respectively; (2) a cross-control-point intermediate aperture simulation is incorporated to better conform to the underlying physics; and (3) new pricing and pruning subroutines are adopted to achieve better optimization outputs. To evaluate the effectiveness of the NCG, five VMAT plans (three prostate cases and two head-and-neck cases) were computed using the proposed NCG, and the results were compared with those yielded by a historical benchmark planning scheme.
Results: The NCG generated plans of significantly better quality than the benchmark planning algorithm. For the prostate cases, NCG plans satisfied all planning target volume (PTV) criteria, whereas CG plans missed the D10% criteria of the PTVs by 9 Gy or more on all cases. For the head-and-neck cases, NCG plans again satisfied all PTV criteria, while CG plans missed the D10% criteria of the PTVs by 3 Gy or more on all cases, as well as the maximum-dose criteria of both the cord and the brain stem by over 13 Gy on one case. Moreover, the pruning scheme was found to be effective in enhancing optimization quality.
Conclusions: The proposed NCG inherits the computational advantages of traditional CG while capturing a more realistic characterization of the machine capability and underlying physics. Its output solutions are substantially closer to practical implementation.
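To make the CG machinery concrete, the following is a minimal, self-contained sketch of a column-generation loop over apertures: a restricted master problem, a pricing step that searches for an improving column, and a pruning pass. Everything here (the toy least-squares master, the sign-based pricing rule, the weight-threshold pruning, and the random dose-influence matrix) is an illustrative assumption, not the NCG subroutines described above, which additionally enforce dose-rate, treatment-time, and MLC deliverability constraints.

```python
# Minimal column-generation skeleton for aperture-based planning.
# The master problem, pricing rule, and pruning test below are toy
# stand-ins for illustration; they are not the paper's NCG subroutines.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_bixels = 50, 20
dose_per_bixel = rng.random((n_voxels, n_bixels))   # toy dose-influence matrix
prescription = np.ones(n_voxels)

def solve_master(columns):
    """Fit nonnegative aperture weights by projected gradient (toy master)."""
    A = dose_per_bixel @ np.stack(columns, axis=1)   # voxels x apertures
    w = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)         # 1/L for this quadratic
    for _ in range(500):
        w = np.maximum(0.0, w - step * A.T @ (A @ w - prescription))
    return w, A @ w - prescription                   # weights, dose residual

def price_column(residual):
    """Open the bixels with negative objective gradient (toy pricing)."""
    grad = dose_per_bixel.T @ residual               # d(objective)/d(bixel)
    col = (grad < 0).astype(float)                   # aperture = helpful bixels
    return col, float(grad @ col)                    # column, reduced cost

columns = [np.ones(n_bixels)]                        # start with an open field
for _ in range(10):
    weights, residual = solve_master(columns)
    col, reduced_cost = price_column(residual)
    if reduced_cost >= -1e-8:                        # no improving column: stop
        break
    columns.append(col)
    # toy pruning: drop columns whose master weight collapsed to ~0;
    # the newly priced column (paired with 1.0) is always kept
    kept = [c for c, w in zip(columns, np.append(weights, 1.0)) if w > 1e-6]
    columns = kept if kept else columns
print(len(columns), "apertures, residual norm", np.linalg.norm(residual))
```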
-
Abstract
Purpose: Most commercially available treatment planning systems (TPSs) approximate the continuous delivery of volumetric modulated arc therapy (VMAT) plans with a series of discretized static beams for treatment planning, which can make VMAT dose computation extremely inefficient. In this study, we developed a polar-coordinate-based pencil beam (PB) algorithm for efficient VMAT dose computation with high-resolution gantry-angle sampling, which can improve computational efficiency and reduce the dose discrepancy caused by the angular under-sampling effect.
Methods and Materials: 6 MV pencil beams were simulated on a uniform cylindrical phantom in an EGSnrc Monte Carlo (MC) environment. The MC-generated PB kernels were collected in the polar coordinate system for each bixel on a fluence map and subsequently fitted with a series of Gaussians. The fluence was calculated using a detector's-eye view, with off-axis and MLC transmission factors corrected. Doses of a VMAT arc on the phantom were computed by summing the convolution results between the corresponding PB kernels and the fluence for each bixel in the polar coordinate system. The convolution was performed using the fast Fourier transform to expedite computation. The calculated doses were converted to the Cartesian coordinate system and compared with the reference dose computed by the collapsed cone convolution (CCC) algorithm of the TPS. A heterogeneous phantom was created to study heterogeneity corrections with the proposed algorithm. Ten VMAT arcs were included to evaluate the algorithm's performance. Gamma analysis and computational complexity theory were used to measure dosimetric accuracy and computational efficiency, respectively.
Results: The dosimetric comparisons on the homogeneous phantom between the proposed PB algorithm and the CCC algorithm for the 10 VMAT arcs demonstrate that the proposed algorithm achieves dosimetric accuracy comparable to that of the CCC algorithm, with average gamma passing rates of 96% (2%/2 mm) and 98% (3%/3 mm). In addition, the proposed algorithm provides better computational efficiency for VMAT dose computation on a PC equipped with a 4-core processor than the CCC algorithm running on a dual 10-core server. Moreover, the complexity analysis reveals that the proposed algorithm holds a substantial efficiency advantage for VMAT dose computation on homogeneous media, especially when a fine angular sampling rate is applied. This supports reducing dose errors from the angular under-sampling effect through finer angular sampling while still preserving a practical computing speed. For dose calculation on the heterogeneous phantom, the proposed algorithm with heterogeneity corrections still offers reasonable dosimetric accuracy with computational efficiency comparable to that of the CCC algorithm.
Conclusions: We proposed a novel polar-coordinate-based pencil beam algorithm for VMAT dose computation that enables better computational efficiency while maintaining clinically acceptable dosimetric accuracy and reducing the dose error caused by the angular under-sampling effect. It also provides a flexible VMAT dose computation structure that allows adjustable sampling rates and direct dose computation in regions of interest, making the algorithm potentially useful for clinical applications such as independent dose verification for VMAT patient-specific QA.
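The core speed-up described above is the FFT-based convolution of a fitted kernel with a fluence map. Below is a minimal sketch of that operation on a Cartesian grid with a synthetic Gaussian kernel; the paper's algorithm instead convolves Monte-Carlo-fitted kernels in the polar coordinate system, so the grid size, kernel width, and field shape here are all assumptions for illustration.

```python
# 2D kernel-fluence convolution via FFT. The Gaussian kernel below is a
# synthetic stand-in for the paper's Monte-Carlo-fitted pencil-beam kernels.
import numpy as np

n = 128                                      # toy grid resolution
y, x = np.mgrid[:n, :n] - n // 2
kernel = np.exp(-(x**2 + y**2) / (2 * 5.0**2))
kernel /= kernel.sum()                       # normalize deposited energy

fluence = np.zeros((n, n))
fluence[40:90, 50:80] = 1.0                  # toy open-field fluence map

# Direct convolution costs O(n^4) operations; the FFT route costs
# O(n^2 log n), which is what makes fine angular sampling affordable.
F = np.fft.fft2(fluence) * np.fft.fft2(np.fft.ifftshift(kernel))
dose = np.real(np.fft.ifft2(F))
print(dose.max(), dose[65, 65])              # ~1.0 well inside the field
```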
-
Abstract
Objective: UNet-based deep-learning (DL) architectures are promising dose engines for traditional linear accelerator (Linac) models. Current UNet-based engines, however, were designed with various differing strategies, making it challenging to fairly compare results across studies. The objective of this study is to thoroughly evaluate the performance of UNet-based models on magnetic-resonance (MR)-Linac-based intensity-modulated radiation therapy (IMRT) dose calculations.
Approach: The UNet-based models, including the standard UNet, cascaded UNet, dense-dilated UNet, residual UNet, HD-UNet, and attention-aware UNet, were implemented. The model input is the patient CT and the IMRT field dose in water, and the output is the patient dose calculated by the DL model. The reference dose was calculated by the Monaco Monte Carlo module. Twenty training and ten test cases of prostate patients were included. The accuracy of the DL-calculated doses was measured using gamma analysis, and the calculation efficiency was evaluated by inference time.
Results: All the studied models effectively corrected low-accuracy doses in water to high-accuracy patient doses in a magnetic field. The gamma passing rates between reference and DL-calculated doses were over 86% (1%/1 mm), 98% (2%/2 mm), and 99% (3%/3 mm) for all the models. The inference times ranged from 0.03 s (graphics processing unit) to 7.5 s (central processing unit). Each model demonstrated different strengths in calculation accuracy and efficiency: Res-UNet achieved the highest accuracy; HD-UNet offered high accuracy with the fewest parameters but the longest inference; dense-dilated UNet was consistently accurate regardless of model levels; standard UNet had the shortest inference but relatively lower accuracy; and the others showed average performance. The best-performing model would therefore depend on the specific clinical needs and available computational resources.
Significance: This study explored the feasibility of using common UNet-based models for MR-Linac-based dose calculations. By using the same model input type, patient training data, and computing environment, a fair assessment of the models' performance was presented.
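Gamma analysis is the accuracy metric used throughout this comparison. As a reference point, here is a simplified brute-force 2D global-gamma passing-rate computation; clinical gamma tools add sub-pixel interpolation and full 3D search, which are omitted here, and the dose grids, criteria, and 10% low-dose cutoff are illustrative assumptions.

```python
# Brute-force 2D global gamma passing rate (simplified sketch).
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dd=0.02, dta_mm=2.0, cutoff=0.10):
    norm = ref.max()                                 # global normalization
    r = int(np.ceil(dta_mm / spacing_mm))            # search radius in pixels
    ny, nx = ref.shape
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff * norm:            # skip low-dose region
                continue
            total += 1
            best = np.inf                            # best gamma^2 over shifts
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    dose_term = ((ev[ii, jj] - ref[i, j]) / (dd * norm)) ** 2
                    dist_term = (di**2 + dj**2) * spacing_mm**2 / dta_mm**2
                    best = min(best, dose_term + dist_term)
            passed += best <= 1.0                    # gamma <= 1 passes
    return 100.0 * passed / max(total, 1)

rng = np.random.default_rng(1)
ref = rng.random((40, 40))
print(gamma_pass_rate(ref, ref + 0.005, spacing_mm=1.0))  # ~100% for tiny shift
```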
-
New Sample Complexity Bounds for Sample Average Approximation in Heavy-Tailed Stochastic Programming
This paper studies sample average approximation (SAA) and a simple regularized variant of it for solving convex or strongly convex stochastic programming problems. Under heavy-tailed assumptions and regularity conditions comparable to those in the typical SAA literature, we show, perhaps for the first time, that the sample complexity can be completely free of any complexity measure (e.g., the logarithm of the covering number) of the feasible region. As a result, our new bounds can be more advantageous than the state of the art in terms of the dependence on the problem dimensionality.
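For readers outside stochastic programming, the SAA construction referred to above is, in standard notation, the following; the quadratic proximal form shown for the regularized variant is a generic assumption, not necessarily the paper's exact regularizer.

```latex
% True stochastic program and its sample average approximation (SAA),
% built from i.i.d. samples xi_1, ..., xi_N of the random parameter xi:
\min_{x \in X} \; F(x) := \mathbb{E}\left[ f(x,\xi) \right]
\quad\Longrightarrow\quad
\min_{x \in X} \; \hat F_N(x) := \frac{1}{N} \sum_{i=1}^{N} f(x,\xi_i),
% with a simple regularized variant (generic proximal term, assumed form):
\min_{x \in X} \; \hat F_N(x) + \frac{\mu}{2}\,\lVert x - x_0 \rVert^2 .
```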
-
Abstract
Objective: Deep-learning (DL)-based dose engines have been developed to alleviate the intrinsic compromise between the calculation accuracy and efficiency of traditional dose calculation algorithms. However, current DL-based engines typically have high computational complexity and require powerful computing devices. To mitigate their computational burden and broaden their applicability to clinical settings where only resource-limited devices are available, we proposed a compact dose engine built on a knowledge distillation (KD) framework that offers ultra-fast calculation with high accuracy for prostate volumetric modulated arc therapy (VMAT).
Approach: The KD framework contains two sub-models: a large pre-trained teacher and a small to-be-trained student. The student receives knowledge transferred from the teacher for better generalization, and the trained student serves as the final engine for dose calculation. The model input is the patient computed tomography and the VMAT dose in water, and the output is the DL-calculated patient dose. The ground-truth dose was computed by the Monte Carlo module of the Monaco treatment planning system. Twenty and ten prostate cases were included for model training and assessment, respectively. The models' performance (teacher/student/student-only) was evaluated by gamma analysis and inference efficiency.
Main results: The dosimetric comparisons (input/DL-calculated/ground-truth doses) suggest that the proposed engine can effectively convert low-accuracy doses in water to high-accuracy patient doses. The gamma passing rate (2%/2 mm, 10% threshold) between the DL-calculated and ground-truth doses was 98.64 ± 0.62% (teacher), 98.13 ± 0.76% (student), and 96.95 ± 1.02% (student-only). The inference time was 16 ms (teacher) and 11 ms (student/student-only) on a graphics processing unit, versus 936 ms (teacher) and 374 ms (student/student-only) on a central processing unit.
Significance: With the KD framework, a compact dose engine can achieve accuracy comparable to that of a larger one. Its compact size reduces the computational burden and computing-device requirements, making such an engine more clinically applicable.
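A minimal sketch of one teacher-student training step for dose regression is shown below. The tiny convolutional networks, the plain MSE distillation term, and the weighting alpha are illustrative assumptions, not the paper's architectures or loss.

```python
# Minimal knowledge-distillation step for a dose-regression student.
# Architectures, the MSE distillation term, and alpha are illustrative
# assumptions; the paper's teacher/student are larger networks.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv3d(2, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(32, 1, 3, padding=1)).eval()
student = nn.Sequential(nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(8, 1, 3, padding=1))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse, alpha = nn.MSELoss(), 0.5

# Toy batch: channel 0 = CT, channel 1 = dose in water; target = patient dose.
x = torch.randn(1, 2, 16, 32, 32)
target = torch.randn(1, 1, 16, 32, 32)

with torch.no_grad():
    soft = teacher(x)                  # teacher's prediction ("knowledge")
pred = student(x)
# Student matches both the ground truth and the teacher's output.
loss = alpha * mse(pred, target) + (1 - alpha) * mse(pred, soft)
loss.backward()
opt.step()
print(float(loss))
```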
-
High-dimensional statistical learning (HDSL) has wide applications in data analysis, operations research, and decision making. Despite the availability of multiple theoretical frameworks, most existing HDSL schemes stipulate the following two conditions: (a) sparsity and (b) restricted strong convexity (RSC). This paper generalizes both conditions via the use of the folded concave penalty (FCP). More specifically, we consider an M-estimation problem where (i) (conventional) sparsity is relaxed into approximate sparsity and (ii) RSC is completely absent. We show that FCP-based regularization leads to poly-logarithmic sample complexity: the required training data size is only poly-logarithmic in the problem dimensionality. This finding can facilitate the analysis of two important classes of models that are currently less well understood: high-dimensional nonsmooth learning and (deep) neural networks (NNs). For both problems, we show that poly-logarithmic sample complexity can be maintained. In particular, our results indicate that the generalizability of NNs under overparameterization can be theoretically ensured with the aid of regularization.
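As a concrete instance, the FCP-regularized M-estimation problem can be written as follows, with the minimax concave penalty (MCP) shown as one standard member of the folded concave family; the choice of MCP here is ours for illustration, as the paper's framework covers the FCP family generally.

```latex
% FCP-regularized M-estimation: empirical loss plus a folded concave
% penalty applied coordinate-wise to the parameter vector beta:
\hat\beta \in \arg\min_{\beta \in \mathbb{R}^d} \;
  \frac{1}{n}\sum_{i=1}^{n} \ell(\beta; z_i)
  + \sum_{j=1}^{d} P_{\lambda}\!\left(|\beta_j|\right),
% e.g., the minimax concave penalty (MCP) with concavity parameter a > 0:
P_{\lambda}(t) =
\begin{cases}
  \lambda t - \dfrac{t^{2}}{2a}, & 0 \le t \le a\lambda,\\[4pt]
  \dfrac{a\lambda^{2}}{2}, & t > a\lambda.
\end{cases}
```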