

This content will become publicly available on August 6, 2026

Title: Evaluation of quantum contouring algorithms for treatment planning on MR abdominal images
Introduction: Quantum computing is increasingly being investigated for integration into medical radiology and healthcare applications worldwide. Given its potential to enhance clinical care and medical research, there is growing interest in evaluating its practical applications in clinical workflows.
Methods: We developed an evaluation of quantum computing-based auto-contouring methods to introduce medical physicists to this emerging technology. We implemented existing quantum algorithms as prototypes tailored for specific quantum hardware, focusing on their application to auto-contouring in medical imaging. The evaluation was performed on a magnetic resonance imaging (MRI) abdominal dataset comprising 102 patient scans.
Results: The quantum algorithms were applied to the dataset and assessed for their potential in auto-contouring tasks. One of the quantum-based auto-contouring methods demonstrated conceptual feasibility, but practical performance is still limited by currently available quantum hardware and scalability constraints.
Discussion: Our findings suggest that while quantum computing for auto-contouring shows promise, it remains in its early stages. At present, artificial intelligence-based algorithms remain the preferred choice for auto-contouring in treatment planning due to their greater efficiency and accuracy. As quantum hardware and algorithms mature, their integration into clinical workflows may become more viable.
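The abstract does not specify how the contouring output was scored, but auto-contouring quality is conventionally assessed with overlap measures such as the Dice similarity coefficient. A minimal sketch, assuming binary contour masks as NumPy arrays (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary contour masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# toy example: two overlapping square "contours" on a 10x10 grid
a = np.zeros((10, 10)); a[2:6, 2:6] = 1
b = np.zeros((10, 10)); b[3:7, 3:7] = 1
print(dice_coefficient(a, b))
```

A Dice score of 1.0 indicates perfect overlap and 0.0 indicates none; clinical auto-contouring studies typically report per-organ Dice alongside surface-distance metrics.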
Award ID(s):
2111147
PAR ID:
10644183
Publisher / Repository:
Frontiers
Date Published:
Journal Name:
Frontiers in Oncology
Volume:
15
ISSN:
2234-943X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Image quality assessment (IQA) is indispensable in clinical practice to ensure high standards, as well as during the development of machine learning algorithms that operate on medical images. The popular full-reference (FR) IQA measures PSNR and SSIM are known to work well in many natural-imaging tasks, but discrepancies in medical scenarios have been reported in the literature, highlighting the gap between development and actual clinical application. Such inconsistencies are not surprising: medical images have very different properties than natural images, and PSNR and SSIM have neither been targeted at nor properly tested on medical images. This may cause unforeseen problems in clinical applications due to wrong judgement of novel methods. This paper provides a structured and comprehensive overview of examples where PSNR and SSIM prove unsuitable for assessing novel algorithms on different kinds of medical images, including real-world MRI, CT, OCT, X-ray, digital pathology, and photoacoustic imaging data. Improvement is therefore urgently needed, particularly in this era of AI, to increase reliability and explainability in machine learning for medical imaging and beyond. Lastly, we provide ideas for future research and suggest guidelines for the use of FR-IQA measures applied to medical images.
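The PSNR measure discussed above is computed directly from the mean squared error. A minimal NumPy sketch, which also illustrates one of its quirks: a small uniform intensity shift changes the image's appearance very little, yet yields only a moderate score because MSE treats every pixel deviation alike (the helper name and toy images are illustrative):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# a uniform +5 intensity shift: structurally identical, yet PSNR is only ~34 dB
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
shifted = np.clip(img + 5, 0, 255)
print(round(psnr(img, shifted), 2))
```

SSIM, by contrast, is largely insensitive to such global shifts, which is one reason the two measures can disagree; neither behavior necessarily matches a radiologist's judgement on medical data.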
  2. Artificial intelligence algorithms are being adopted to analyze medical data, promising faster interpretation to support doctors' diagnostics. The next frontier is to bring these powerful algorithms to implantable medical devices. Herein, a closed-loop solution is proposed in which a cellular neural network, used to detect abnormal wavefronts and wavebreaks in cardiac signals recorded in human tissue, is trained to achieve >96% accuracy, >92% precision, >99% specificity, and >93% sensitivity when floating-point-precision weights are assumed. Unfortunately, current hardware technologies for floating-point precision are too bulky or energy intensive for compact standalone applications in medical implants. Emerging device technologies, such as memristors, can provide the compact and energy-efficient hardware fabric to support these efforts and can be reliably embedded with existing sensor and actuator platforms in implantable devices. A distributed design that considers the hardware limitations in terms of overhead and limited bit precision is also discussed. The proposed distributed solution can be easily adapted to other medical technologies that require compact and efficient computing, such as wearable devices and lab-on-chip platforms.
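The abstract names a cellular neural network but gives no equations or templates. A generic sketch of the standard Chua-Yang cell dynamics, Euler-integrated with classic edge-detection-style templates as a stand-in (the templates A and B, bias z, step size, and toy input are all assumptions, not the paper's trained network):

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_output(x):
    """Standard piecewise-linear CNN activation y = 0.5*(|x+1| - |x-1|)."""
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of the Chua-Yang dynamics dx/dt = -x + A*y + B*u + z,
    where A and B are 3x3 feedback/feedforward coupling templates."""
    y = cnn_output(x)
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
    return x + dt * dx

# toy run: edge-detection-style templates applied to a small binary input
A = np.zeros((3, 3)); A[1, 1] = 1.0                     # self-feedback only
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
z = -1.0
u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0                 # a bright square
x = u.copy()
for _ in range(50):
    x = cnn_step(x, u, A, B, z)
edges = cnn_output(x)   # settles to +1 on the square's border, -1 elsewhere
```

Because each cell couples only to a 3x3 neighborhood, this update maps naturally onto the compact, locally connected memristor crossbar fabric the article discusses.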
  3. Apptainer (formerly known as Singularity) is a secure, portable, and easy-to-use container system that provides strong trust and security guarantees. It is widely used across industry and academia and is well suited to bridging the gap between applications built on new software technologies and legacy hardware, with optimized CPU and memory utilization. It runs complex applications on HPC clusters in a simple, reproducible way. In this paper we discuss various implementations of artificial intelligence and machine learning container-based applications running on Pegasus supercomputing nodes using Singularity and Nextflow. This approach reduces manual configuration work for Singularity applications and improves the runtime performance of High-Performance Computing (HPC) and High-Throughput Computing (HTC) workflows by 3X. We also present a comparative evaluation of running an application through a normal LSF job versus a Singularity container, including CPU and GPU utilization and the associated tradeoffs.
  4. Abstract: Background: To address the limitations of large-scale high quality microscopy image acquisition, PSSR (Point-Scanning Super-Resolution) was introduced to enhance easily acquired low quality microscopy data to a higher quality using deep learning-based methods. However, while PSSR was released as open-source, it was difficult for users to implement into their workflows due to an outdated codebase, limiting its usage by prospective users. Additionally, while the data enhancements provided by PSSR were significant, there was still potential for further improvement. Methods: To overcome this, we introduce PSSR2, a redesigned implementation of PSSR workflows and methods built to put state-of-the-art technology into the hands of the general microscopy and biology research community. PSSR2 enables user-friendly implementation of super-resolution workflows for simultaneous super-resolution and denoising of undersampled microscopy data, especially through its integrated Command Line Interface and Napari plugin. PSSR2 improves and expands upon previously established PSSR algorithms, mainly through improvements in the semi-synthetic data generation ("crappification") and training processes. Results: In benchmarking PSSR2 on a test dataset of paired high and low resolution electron microscopy images, PSSR2 super-resolves high-resolution images from low-resolution images to a significantly higher accuracy than PSSR. The super-resolved images are also more visually representative of real-world high-resolution images. Discussion: The improvements in PSSR2, in providing higher quality images, should improve the performance of downstream analyses. We note that for accurate super-resolution, PSSR2 models should only be applied to super-resolve data sufficiently similar to training data and should be validated against real-world ground truth data.
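The semi-synthetic "crappification" mentioned above pairs real high-resolution images with artificially degraded counterparts for training. A simplified sketch of the idea, additive noise followed by block-average downsampling (PSSR2's actual crappifier is more sophisticated; the function name and parameters here are illustrative assumptions):

```python
import numpy as np

def crappify(img: np.ndarray, scale: int = 4, noise_sigma: float = 10.0,
             rng=None) -> np.ndarray:
    """Semi-synthetic degradation: inject Gaussian noise, then downsample by
    block-averaging to mimic an undersampled, noisier acquisition."""
    rng = rng or np.random.default_rng()
    noisy = img.astype(np.float64) + rng.normal(0.0, noise_sigma, img.shape)
    h, w = noisy.shape
    h, w = h - h % scale, w - w % scale          # crop to a multiple of scale
    blocks = noisy[:h, :w].reshape(h // scale, scale, w // scale, scale)
    low_res = blocks.mean(axis=(1, 3))           # average each scale x scale block
    return np.clip(low_res, 0, 255)

rng = np.random.default_rng(1)
hi = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
lo = crappify(hi, scale=4, noise_sigma=10.0, rng=rng)
print(hi.shape, "->", lo.shape)
```

The (high-resolution, degraded) pairs then serve as training targets and inputs, so the network learns to invert the degradation on real undersampled data.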
  5. Importance: Large language models (LLMs) can assist in various health care activities, but current evaluation approaches may not adequately identify the most useful application areas. Objective: To summarize existing evaluations of LLMs in health care in terms of 5 components: (1) evaluation data type, (2) health care task, (3) natural language processing (NLP) and natural language understanding (NLU) tasks, (4) dimension of evaluation, and (5) medical specialty. Data Sources: A systematic search of PubMed and Web of Science was performed for studies published between January 1, 2022, and February 19, 2024. Study Selection: Studies evaluating 1 or more LLMs in health care. Data Extraction and Synthesis: Three independent reviewers categorized studies via keyword searches based on the data used, the health care tasks, the NLP and NLU tasks, the dimensions of evaluation, and the medical specialty. Results: Of the 519 studies reviewed, only 5% used real patient care data for LLM evaluation. The most common health care tasks were assessing medical knowledge such as answering medical licensing examination questions (44.5%) and making diagnoses (19.5%). Administrative tasks such as assigning billing codes (0.2%) and writing prescriptions (0.2%) were less studied. For NLP and NLU tasks, most studies focused on question answering (84.2%), while tasks such as summarization (8.9%) and conversational dialogue (3.3%) were infrequent. Almost all studies (95.4%) used accuracy as the primary dimension of evaluation; fairness, bias, and toxicity (15.8%), deployment considerations (4.6%), and calibration and uncertainty (1.2%) were infrequently measured. Finally, in terms of medical specialty area, most studies were in generic health care applications (25.6%), internal medicine (16.4%), surgery (11.4%), and ophthalmology (6.9%), with nuclear medicine (0.6%), physical medicine (0.4%), and medical genetics (0.2%) being the least represented. Conclusions and Relevance: Existing evaluations of LLMs mostly focus on accuracy of question answering for medical examinations, without consideration of real patient care data. Dimensions such as fairness, bias, and toxicity and deployment considerations received limited attention. Future evaluations should adopt standardized applications and metrics, use clinical data, and broaden focus to include a wider range of tasks and specialties.
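The keyword-based categorization used in the review's data extraction step can be sketched as a simple matcher over study text (the keyword map and toy study titles below are hypothetical, not the review's actual coding scheme):

```python
from collections import Counter

# Hypothetical keyword map; the review's real categories and keyword
# lists are more extensive and were applied by three independent reviewers.
NLP_TASK_KEYWORDS = {
    "question answering": ["question", "answer", "exam"],
    "summarization": ["summariz", "summary"],
    "conversational dialogue": ["dialogue", "chatbot", "conversation"],
}

def categorize(text: str) -> list[str]:
    """Assign NLP/NLU task categories to a study by substring keyword matching."""
    lowered = text.lower()
    return [task for task, kws in NLP_TASK_KEYWORDS.items()
            if any(kw in lowered for kw in kws)]

studies = [
    "Evaluating GPT-4 on medical licensing exam questions",
    "LLM-based summarization of radiology reports",
    "A chatbot for patient triage conversation",
]
counts = Counter(task for s in studies for task in categorize(s))
for task, n in counts.items():
    print(task, n)
```

Tallying the per-category counts over all included studies yields the kind of percentage breakdown the Results section reports.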