Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- 
Abstract A three-dimensional convolutional neural network (3D-CNN) was developed for the analysis of volumetric optical coherence tomography (OCT) images to enhance endoscopic guidance during percutaneous nephrostomy. The model was benchmarked using a 10-fold nested cross-validation procedure and achieved an average test accuracy of 90.57% across a dataset of 10 porcine kidneys. This performance significantly exceeded that of 2D-CNN models, which attained average test accuracies ranging from 85.63% to 88.22% using 1, 10, or 100 radial sections extracted from the 3D OCT volumes. The 3D-CNN (~12 million parameters) was also benchmarked against three state-of-the-art volumetric architectures: the 3D Vision Transformer (3D-ViT, ~45 million parameters), 3D-DenseNet121 (~12 million parameters), and the Multi-plane and Multi-slice Transformer (M3T, ~29 million parameters). While these models achieved comparable inference accuracy, the 3D-CNN exhibited lower inference latency (33 ms) than 3D-ViT (86 ms), 3D-DenseNet121 (58 ms), and M3T (93 ms), a critical advantage for real-time surgical guidance applications. These results demonstrate the 3D-CNN's capability as a powerful and practical tool for computer-aided diagnosis in OCT-guided surgical interventions.
Free, publicly-accessible full text available July 25, 2026.
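As a rough illustration of the latency comparison, the snippet below times single-volume inference for a small 3D CNN in PyTorch. The architecture, input size, and timing protocol are placeholder assumptions for the sketch, not the paper's actual model or benchmark setup.

```python
# Minimal sketch: timing single-volume inference for a small 3D CNN in PyTorch.
# The architecture and the 64x64x64 input shape are illustrative assumptions.
import time
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

def mean_latency_ms(model, volume, n_runs=50, warmup=5):
    """Average forward-pass time in milliseconds over n_runs."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):      # warm-up passes excluded from timing
            model(volume)
        start = time.perf_counter()
        for _ in range(n_runs):
            model(volume)
        return (time.perf_counter() - start) / n_runs * 1e3

model = Small3DCNN()
volume = torch.randn(1, 1, 64, 64, 64)   # one single-channel OCT volume
print(f"mean latency: {mean_latency_ms(model, volume):.1f} ms")
```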
- 
Abstract Epidural anesthesia helps manage pain during various surgeries. Nonetheless, the precise placement of the epidural needle remains a challenge. In this study, we developed a probe based on polarization-sensitive optical coherence tomography (PS-OCT) to enhance epidural anesthesia needle placement. The probe was tested on six porcine spinal samples. The multimodal imaging guidance used the OCT intensity mode and three distinct PS-OCT modes: (1) phase retardation, (2) optic axis, and (3) degree of polarization uniformity (DOPU). Each mode enabled the classification of different epidural tissues through distinct imaging characteristics. To further streamline the tissue recognition procedure, convolutional neural networks (CNNs) were used to autonomously identify the tissue types within the probe's field of view. ResNet50 models were developed for all four imaging modes. DOPU imaging provided the highest cross-testing accuracy of 91.53%. These results demonstrate the improved precision offered by PS-OCT in guiding epidural anesthesia needle placement.
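The tissue-recognition step can be sketched along these lines: a pretrained ResNet50 with its final layer replaced for the tissue classes of one imaging mode. The class count, input size, and three-channel replication below are assumptions for illustration, not details taken from the study.

```python
# Sketch: adapting a pretrained ResNet50 to classify epidural tissue types
# from one PS-OCT imaging mode. The number of classes (4) is an assumption.
import torch
import torch.nn as nn
from torchvision import models

def build_tissue_classifier(num_classes=4):
    # Load ImageNet-pretrained weights and replace the final fully connected layer.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_tissue_classifier()
# One mini-batch of mode-specific B-scans, resized to 224x224 and
# replicated to 3 channels to match the pretrained input format.
batch = torch.randn(8, 3, 224, 224)
logits = model(batch)                 # shape: (8, num_classes)
predictions = logits.argmax(dim=1)
print(predictions.shape)
```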
- 
Abstract Cerebral microvascular health is a key biomarker for the study of natural aging and associated neurological diseases. Our aim is to quantify aging-associated changes of the microvasculature at diverse dimensions in the mouse brain. We used optical coherence tomography (OCT) and two-photon microscopy (TPM) to obtain cerebral microvascular images from nonaged and aged C57BL/6J mice in vivo. Our results indicated that arteries and veins, arterioles and venules, and capillaries from nonaged and aged mice showed significant differences in density, diameter, complexity, perimeter, and tortuosity. OCT angiography and TPM provided comprehensive quantification of arterioles and venules by compensating for the limitations of each modality alone. We further demonstrated that arterioles and venules at specific dimensions exhibited negative correlations in most quantification analyses between nonaged and aged mice, which indicated that TPM and OCT offer complementary vascular information for studying changes of cerebral blood vessels in aging.
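Of the reported vessel metrics, tortuosity has a simple standard definition (centerline path length divided by end-to-end chord length) that can be shown in a few lines. The centerline below is synthetic, and the study's exact quantification pipeline may differ.

```python
# Sketch: distance-metric tortuosity of a traced vessel centerline,
# i.e. path length divided by straight-line end-to-end distance.
# The centerline coordinates here are synthetic placeholders.
import numpy as np

def tortuosity(centerline_xyz):
    """centerline_xyz: (N, 3) array of ordered centerline points."""
    pts = np.asarray(centerline_xyz, dtype=float)
    segment_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    path_length = segment_lengths.sum()
    chord_length = np.linalg.norm(pts[-1] - pts[0])
    return path_length / chord_length

# A gently curving synthetic vessel: tortuosity slightly above 1.0.
t = np.linspace(0, np.pi, 100)
vessel = np.stack([t, 0.2 * np.sin(t), np.zeros_like(t)], axis=1)
print(f"tortuosity = {tortuosity(vessel):.3f}")
```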
- 
Boudoux, Caroline; Tunnell, James W (Ed.)
Free, publicly-accessible full text available March 20, 2026.
- 
Leitgeb, Rainer A; Yasuno, Yoshiaki (Ed.)
Free, publicly-accessible full text available March 19, 2026.
- 
Variability and bias in the performance benchmarking of deep learning models for medical imaging compromise their trustworthiness for real-world deployment. The common approach of holding out a single fixed test set fails to quantify the variance in the estimation of test performance metrics. This study introduces NACHOS (Nested and Automated Cross-validation and Hyperparameter Optimization using Supercomputing) to reduce and quantify the variance of test performance metrics of deep learning models. NACHOS integrates Nested Cross-Validation (NCV) and Automated Hyperparameter Optimization (AHPO) within a parallelized high-performance computing (HPC) framework. NACHOS was demonstrated on a chest X-ray repository and an Optical Coherence Tomography (OCT) dataset under multiple data partitioning schemes. Beyond performance estimation, DACHOS (Deployment with Automated Cross-validation and Hyperparameter Optimization using Supercomputing) is introduced to leverage AHPO and cross-validation to build the final model on the full dataset, improving expected deployment performance. The findings underscore the importance of NCV in quantifying and reducing estimation variance, AHPO in optimizing hyperparameters consistently across test folds, and HPC in ensuring computational feasibility. By integrating these methodologies, NACHOS and DACHOS provide a scalable, reproducible, and trustworthy framework for deep learning model evaluation and deployment in medical imaging.
Free, publicly-accessible full text available March 11, 2026.
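The nested-cross-validation idea at the core of NACHOS can be illustrated, minus the HPC parallelization, with scikit-learn: hyperparameter search runs in an inner loop while an outer loop estimates test performance across folds. The classifier, hyperparameter grid, and synthetic data below are assumptions, not the study's configuration.

```python
# Sketch: nested cross-validation, with hyperparameter optimization in the
# inner loop and performance estimation in the outer loop.
# Data and hyperparameter grid are illustrative, not from the study.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Inner loop: tune hyperparameters on the training folds only.
search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    cv=inner_cv,
)

# Outer loop: each test fold scores a model tuned without seeing it,
# yielding a distribution of test metrics rather than a single number.
scores = cross_val_score(search, X, y, cv=outer_cv)
print(f"outer-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```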
- 
            Free, publicly-accessible full text available December 1, 2025
- 
            Applegate, Brian E; Tkaczyk, Tomasz S (Ed.)
- 
Background: At the time of cancer diagnosis, it is crucial to accurately classify malignant gastric tumors and to estimate patients' likelihood of survival. Objective: This study aims to investigate the feasibility of identifying and applying a new feature extraction technique to predict the survival of gastric cancer patients. Methods: A retrospective dataset including the computed tomography (CT) images of 135 patients was assembled. Among them, 68 patients survived longer than three years. Several sets of radiomics features were extracted and incorporated into a machine learning model, and their classification performance was characterized. To improve the classification performance, we further extracted another 27 texture and roughness parameters with 2484 superficial and spatial features to propose a new feature pool. This new feature set was added to the machine learning model and its performance was analyzed. To determine the best model for our experiment, four of the most popular machine learning models were utilized: the Random Forest (RF) classifier, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naïve Bayes (NB). The models were trained and tested using five-fold cross-validation. Results: Using the area under the ROC curve (AUC) as an evaluation index, the model generated using the new feature pool yielded AUC = 0.98 ± 0.01, which was significantly higher than the models created using the traditional radiomics feature set (p < 0.04). The RF classifier performed better than the other machine learning models. Conclusions: This study demonstrated that although radiomics features produced good classification performance, creating new feature sets significantly improved model performance.
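The classifier-comparison step can be sketched with cross-validated ROC AUC in scikit-learn. The synthetic features below stand in for the radiomics feature pool, and the default model settings are assumptions rather than the study's tuned configurations.

```python
# Sketch: comparing the four classifiers named in the study by five-fold
# cross-validated ROC AUC. Synthetic features stand in for radiomics data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 135 synthetic cases with 50 placeholder features (binary survival label).
X, y = make_classification(n_samples=135, n_features=50, random_state=1)

models = {
    "RF": RandomForestClassifier(random_state=1),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True, random_state=1)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "NB": GaussianNB(),
}

for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} ± {auc.std():.2f}")
```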
- 
            Izatt, Joseph A.; Fujimoto, James G. (Ed.)