This content will become publicly available on March 20, 2026

Title: A TBP DICOM format for total-body scanner-independent lesion evolution detection
Total-body photography (TBP) has the potential to revolutionize early detection of skin cancers by monitoring minute changes in lesions over time. However, there is no standardized Digital Imaging and Communications in Medicine (DICOM) format for TBP. In order to accommodate various TBP data types and sophisticated data preprocessing pipelines, we propose three TBP Extended Information Object Definitions (IODs) for 2D regional images, dermoscopy images, and 3D surface meshes. We introduce a comprehensive pipeline integrating advanced image processing techniques, including 3D DICOM representation, super-resolution enhancement, and style transfer for dermoscopic-like visualization. Our framework tracks individual lesions across multiple TBP scans from different imaging systems and provides cloud-based storage with a customized DICOM viewer. To demonstrate the effectiveness of our approach, we validate our framework using TBP datasets from multiple imaging systems. Our framework and proposed IODs enhance TBP interoperability and clinical utility in dermatological practice, potentially improving early skin cancer detection.  more » « less
Award ID(s):
2335086
PAR ID:
10608771
Author(s) / Creator(s):
Publisher / Repository:
Proceedings Volume 13313, BiOS, 25–31 January 2025: Biophotonics in Exercise Science, Sports Medicine, Health Monitoring Technologies, and Wearables VI
Date Published:
Subject(s) / Keyword(s):
TBP DICOM Total Body Photography ABCDE
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Abstract Real-time execution of machine learning (ML) pipelines on radiology images is difficult due to limited computing resources in clinical environments, whereas running them in research clusters requires efficient data transfer capabilities. We developed Niffler, an open-source Digital Imaging and Communications in Medicine (DICOM) framework that enables ML and processing pipelines in research clusters by efficiently retrieving images from the hospitals’ PACS and extracting the metadata from the images. We deployed Niffler at our institution (Emory Healthcare, the largest healthcare network in the state of Georgia) and retrieved data from 715 scanners spanning 12 sites, up to 350 GB/day continuously in real time as a DICOM data stream over the past 2 years. We also used Niffler to retrieve images in bulk, on demand, based on user-provided filters to facilitate several research projects. This paper presents the architecture and three such use cases of Niffler. First, we executed an inferior vena cava (IVC) filter detection and segmentation pipeline on abdominal radiographs in real time, which classified 989 test images with an accuracy of 96.0%. Second, we applied the Niffler Metadata Extractor to understand the operational efficiency of individual MRI systems based on calculated metrics. We benchmarked the accuracy of the calculated exam time windows by comparing Niffler against the Clinical Data Warehouse (CDW). Niffler accurately identified the scanners’ examination timeframes and idling times, whereas CDW falsely depicted several exam overlaps due to human errors. Third, with metadata extracted from the images by Niffler, we identified scanners with misconfigured time and reconfigured five scanners. Our evaluations highlight how Niffler enables real-time ML and processing pipelines in a research cluster.
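The scanner-utilization metrics described above (examination timeframes and idling times) amount to gap analysis over per-scanner exam windows. The `idle_gaps` helper and the 15-minute threshold below are illustrative assumptions, not Niffler's actual implementation:

```python
from datetime import timedelta

def idle_gaps(exam_windows, min_gap=timedelta(minutes=15)):
    """Return (exam_end, next_exam_start) idle periods longer than
    min_gap, given a list of (start, end) exam windows for one scanner."""
    windows = sorted(exam_windows)
    gaps = []
    for (_, end), (next_start, _) in zip(windows, windows[1:]):
        if next_start - end > min_gap:
            gaps.append((end, next_start))
    return gaps
```

With timestamps derived from DICOM metadata rather than manually entered records, overlapping "exams" caused by data-entry errors would simply never appear, which is consistent with the discrepancy the authors report against the CDW.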
  2. Among the different types of skin cancer, melanoma is considered to be the deadliest and is difficult to treat at advanced stages. Detection of melanoma at earlier stages can lead to reduced mortality rates. Desktop-based computer-aided systems have been developed to assist dermatologists with early diagnosis. However, there is significant interest in developing portable, at-home melanoma diagnostic systems which can assess the risk of cancerous skin lesions. Here, we present a smartphone application that combines image capture capabilities with preprocessing and segmentation to extract the Asymmetry, Border irregularity, Color variegation, and Diameter (ABCD) features of a skin lesion. Using the feature sets, classification of malignancy is achieved through support vector machine classifiers. By using adaptive algorithms in the individual data-processing stages, our approach is made computationally light, user-friendly, and reliable in discriminating melanoma cases from benign ones. Images of skin lesions are either captured with the smartphone camera or imported from public datasets. The entire process from image capture to classification runs on an Android smartphone equipped with a detachable 10x lens, and processes an image in less than a second. The overall performance metrics are evaluated on a public database of 200 images with the Synthetic Minority Over-sampling Technique (SMOTE) (80% sensitivity, 90% specificity, 88% accuracy, and 0.85 area under the curve (AUC)) and without SMOTE (55% sensitivity, 95% specificity, 90% accuracy, and 0.75 AUC). The evaluated performance metrics and computation times are comparable to or better than previous methods. This all-inclusive smartphone application is designed to be easy to download and easy to navigate for the end user, which is imperative for the eventual democratization of such medical diagnostic systems.
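As a minimal sketch of how one ABCD feature might be computed, the hypothetical `asymmetry_score` below measures left-right mirror mismatch on a binary lesion mask. This is an illustration of the idea only, not the app's actual feature extractor:

```python
def asymmetry_score(mask):
    """ABCD 'A' feature sketch: fraction of mismatched lesion pixels when
    the binary mask (list of rows of 0/1) is mirrored left-to-right,
    normalized so a perfectly symmetric lesion scores 0.0 and a lesion
    with no mirror overlap scores 1.0."""
    flipped = [row[::-1] for row in mask]
    lesion = sum(v for row in mask for v in row)
    mismatch = sum(
        a != b for row_a, row_b in zip(mask, flipped) for a, b in zip(row_a, row_b)
    )
    # Each non-overlapping lesion pixel contributes two mismatches
    # (its own position and its mirror position), hence the 2x factor.
    return mismatch / (2 * lesion) if lesion else 0.0
```

A real pipeline would mirror about the lesion's principal axes after segmentation rather than the raw bounding box, but the normalization idea is the same.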
  3. Positive outcomes for colorectal cancer treatment have been linked to early detection. The difficulty in detecting early lesions lies in their limited contrast with the surrounding mucosa and the minimal definitive markers that distinguish hyperplastic from carcinoma lesions. Colorectal cancer ranks third among cancers in both incidence and mortality, which is potentially linked to missed early lesions that allow increased growth and metastatic potential. One potential technology for early-stage lesion detection is hyperspectral imaging. Traditionally, hyperspectral imaging uses reflectance spectroscopic data to provide a per-pixel component analysis of an image in fields such as remote sensing, agriculture, food processing, and archaeology. This work aims to acquire higher signal-to-noise fluorescence spectroscopic data, harnessing the autofluorescence of tissue, adding a hyperspectral contrast to colorectal cancer detection while maintaining spatial resolution at video-rate speeds. We previously designed a multi-furcated LED-based spectral light source to prove this concept. Our results demonstrated that the technique is feasible, but the initial prototype has a high light transmission loss (~98%), which limits spatial resolution and slows video acquisition. Here, we present updated results in developing an optical ray-tracing model of light source geometries to maximize irradiance throughput for excitation-scanning hyperspectral imaging. Results show that combining solid light guide branches has a compounding light-loss effect; however, light loss can potentially be minimized through the use of optical claddings. This simulation data will provide the necessary metrics to verify and validate future physical optical components within the hyperspectral endoscopic system for detecting colorectal cancer.
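The compounding light-loss effect noted above follows directly from multiplying per-stage transmissions, which the small sketch below makes concrete. The stage values used are illustrative assumptions, not measurements from the prototype:

```python
def end_to_end_transmission(stage_transmissions):
    """Multiplicative throughput of a multi-branch light guide: each
    coupling or branch stage passes only a fraction of the incident
    light, so per-stage losses compound."""
    total = 1.0
    for t in stage_transmissions:
        total *= t
    return total
```

For example, four cascaded stages each passing 37% of the light would leave about 1.9% end to end (0.37 ** 4 ≈ 0.019), on the order of the ~98% loss reported for the initial prototype; raising any single stage's transmission (e.g., via optical cladding) lifts the whole product proportionally.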
  4. Machine learning (ML) based skin cancer detection tools are an example of a transformative medical technology that could potentially democratize early detection of skin cancer for everyone. However, due to their dependency on training datasets, ML-based skin cancer detection tools always suffer from a systemic racial bias. Racial and ethnic communities that are not well represented within the training datasets will not be able to use these tools, amplifying health disparities. Based on empirical observations, we posit that skin cancer training data is biased, as its datasets mostly represent communities of lighter skin tones, despite skin cancer being far more lethal for people of color. In this paper, we use domain adaptation techniques, employing CycleGANs, to mitigate racial biases existing within state-of-the-art machine learning based skin cancer detection tools by adapting minority images to appear as the majority. Using our domain adaptation techniques to augment our minority datasets, we are able to improve the accuracy, precision, recall, and F1 score of typical image classification machine learning models for skin cancer classification from the biased 50% accuracy rate to a 79% accuracy rate when testing on minority skin tone images. We evaluate and demonstrate a proof-of-concept smartphone application.
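The accuracy, precision, recall, and F1 figures reported above all derive from the same confusion-matrix counts. The `classification_metrics` helper below is a generic sketch of that computation, not the paper's evaluation code:

```python
def classification_metrics(y_true, y_pred):
    """Binary classification metrics from parallel lists of 0/1 labels
    (1 = malignant). Returns accuracy, precision, recall, and F1."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Evaluating these on the minority-skin-tone test split before and after domain adaptation is how a 50%-to-79% accuracy change like the one reported would be measured.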
  5. Reflectance confocal microscopy (RCM) is a noninvasive optical imaging modality that allows for cellular-level resolution, in vivo imaging of skin without performing a traditional skin biopsy. RCM image interpretation currently requires specialized training to interpret the grayscale output images, which are difficult to correlate with tissue pathology. Here, we use a deep learning-based framework that uses a convolutional neural network to transform grayscale output images into virtually stained hematoxylin and eosin (H&E)-like images, allowing for the visualization of various skin layers, including the epidermis, dermal-epidermal junction, and superficial dermis. To train the deep-learning framework, a stack of a minimum of 7 time-lapsed, successive RCM images of excised tissue was obtained from epidermis to dermis, 1.52 microns apart to a depth of 60.96 microns, using the Vivascope 3000. The tissue was embedded in agarose, and a curette was used to create a tunnel through which drops of 50% acetic acid were applied to stain cell nuclei. These acetic acid-stained images were used as “ground truth” to train a deep convolutional neural network, using a conditional generative adversarial network (GAN)-based machine learning algorithm, to digitally convert the images into GAN-based H&E-stained digital images. We used the already-trained machine learning algorithm and retrained it with new samples to include squamous neoplasms. Through further training and refinement of the algorithm, high-resolution, histological-quality images can be obtained to aid in earlier diagnosis and treatment of cutaneous neoplasms. The overall goal is to use this technology to obtain biopsy-free virtual histology images, providing real-time outputs of virtually stained H&E skin lesions, thus decreasing the need for invasive diagnostic procedures and enabling greater uptake of the technology by the medical community.
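The acquisition geometry above (successive slices a fixed axial step apart, down to a maximum depth) is simple bookkeeping; the hypothetical helper below lists the slice depths for a z-stack like the one described, purely as an illustration:

```python
def z_stack_positions(step_um=1.52, max_depth_um=60.96):
    """Depth (in microns) of each slice in a z-stack acquired at a fixed
    axial step from the tissue surface down to at most max_depth_um."""
    n = int(max_depth_um // step_um)  # deepest whole step within range
    return [round(i * step_um, 2) for i in range(n + 1)]
```

With the parameters stated in the abstract this yields 41 slices, from the surface (0.0 µm) down to 60.8 µm, the deepest whole 1.52 µm step within the 60.96 µm range.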