Title: Deep learning classification of cervical dysplasia using depth-resolved angular light scattering profiles
We present a machine learning method, based on a convolutional neural network (CNN) architecture, for detecting and staging dysplastic cervical tissue from light scattering data. Depth-resolved angular scattering measurements from two clinical trials were used to generate independent training and validation sets as input to our model. We report 90.3% sensitivity, 85.7% specificity, and 87.5% accuracy in classifying cervical dysplasia, demonstrating that classification of angle-resolved low-coherence interferometry (a/LCI) scans is consistent across different instruments. Further, our deep learning approach significantly improved processing speed over the traditional Mie-theory-based inverse light scattering analysis (ILSA) method, with a hundredfold reduction in processing time, offering a promising route for clinical use of a/LCI in assessing cervical dysplasia.
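As a rough illustration of the kind of classifier the abstract describes, the sketch below builds a small CNN over depth-resolved angular scattering profiles. The input shape (depth x angle intensity maps), layer sizes, and class count are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a small CNN that labels a
# depth-resolved angular scattering scan as dysplastic vs. normal.
# All shapes and layer widths are assumptions for illustration only.
import torch
import torch.nn as nn

class ScatteringCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = ScatteringCNN()
    # Placeholder batch of 8 synthetic scans (depth x angle), not real a/LCI data.
    scans = torch.randn(8, 1, 64, 128)
    logits = model(scans)
    print(logits.shape)  # torch.Size([8, 2])
```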
Award ID(s):
2009841
PAR ID:
10276910
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Biomedical Optics Express
Volume:
12
Issue:
8
ISSN:
2156-7085
Format(s):
Medium: X
Size(s):
Article No. 4997
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Digital light processing (DLP)-based three-dimensional (3D) printing has the advantages of speed and precision compared with other 3D printing technologies, such as extrusion-based printing, making it a promising biomaterial fabrication technique for tissue engineering and regenerative medicine. When printing cell-laden biomaterials, one challenge of DLP-based bioprinting is that cells in the bioink scatter light, introducing unpredictable effects on the photopolymerization process. Consequently, DLP-based bioprinting requires extra trial-and-error parameter optimization for each specific printed structure to compensate for the cell-induced scattering, which is difficult and time-consuming for a machine operator; such per-structure optimization also wastes expensive biomaterials and cell lines. Here, we use machine learning to learn from a few trial prints and automatically provide the printer with optimal parameters that compensate for cell-induced scattering. We employ a deep learning method with learning-based data augmentation that requires only a small amount of training data. After learning from the data, the algorithm automatically generates printer parameters that compensate for the scattering effects. Our method substantially improves intra-layer printing resolution for bioprinting and can be further extended to address light scattering in multilayer 3D bioprinting processes.
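To make the "learn printer parameters from a few trial prints" idea concrete, here is a very rough sketch. The abstract does not specify the model or inputs, so the choice of a small neural-network regressor, the features (cell density, target line width), and all numbers below are hypothetical stand-ins rather than the paper's pipeline.

```python
# Minimal sketch (assumptions, not the paper's method): fit a small regressor
# on a handful of trial prints that maps bioink/structure descriptors to an
# exposure time compensating cell-induced scattering. All values are made up.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each trial: [cell density (cells/mL), target line width (um)] -> exposure time (s)
trials_X = np.array([
    [1e6,  50], [1e6, 100], [5e6,  50],
    [5e6, 100], [1e7,  50], [1e7, 100],
])
trials_y = np.array([2.1, 1.8, 3.0, 2.5, 4.2, 3.6])  # illustrative exposure times

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
model.fit(trials_X, trials_y)

# Predict printer parameters for a new structure / bioink combination.
print(model.predict([[7.5e6, 75]]))
```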
  2. Abstract We develop a decentralized colouring approach to diversify the nodes in a complex network. The key is the introduction of a local conflict index (LCI) that measures the colour conflicts arising at each node and can be computed efficiently using only local information. We demonstrate on both synthetic and real-world networks that the proposed approach significantly outperforms random colouring as measured by the size of the largest colour-induced connected component. Interestingly, for scale-free networks further improvement of diversity can be achieved by tuning a degree-biasing weighting parameter in the LCI.
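The sketch below is one illustrative reading of this idea: the LCI is taken simply as the number of same-coloured neighbours, each node greedily recolours itself using only its neighbourhood, and diversity is scored by the largest colour-induced connected component. The exact LCI definition, update rule, and degree-biasing weight in the paper may differ; everything here is an assumption.

```python
# Minimal sketch of decentralized colouring driven by a local conflict index (LCI).
# Here LCI(v) = number of neighbours sharing v's colour; the paper's definition
# (including the degree-biasing weight) is not reproduced.
import random
import networkx as nx

def lci(G, colors, v):
    return sum(1 for u in G.neighbors(v) if colors[u] == colors[v])

def decentralized_coloring(G, n_colors=3, sweeps=20, seed=0):
    rng = random.Random(seed)
    colors = {v: rng.randrange(n_colors) for v in G}
    for _ in range(sweeps):
        for v in G:  # each node acts on local information only
            if lci(G, colors, v) > 0:
                colors[v] = min(
                    range(n_colors),
                    key=lambda c: sum(colors[u] == c for u in G.neighbors(v)),
                )
    return colors

def largest_color_component(G, colors):
    # Size of the largest connected component induced by a single colour.
    best = 0
    for c in set(colors.values()):
        sub = G.subgraph([v for v in G if colors[v] == c])
        if sub.number_of_nodes():
            best = max(best, max(len(cc) for cc in nx.connected_components(sub)))
    return best

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(500, 3, seed=1)  # a scale-free test network
    colors = decentralized_coloring(G)
    print(largest_color_component(G, colors))
```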
  3. Abstract We present a deep learning based solution for separating the direct and global light transport components from a single photograph captured under high frequency structured lighting with a co‐axial projector‐camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between both decoders to ensure a consistent decomposition between both light transport components. Furthermore, our deep learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network is able to achieve high fidelity decompositions for lighting frequency sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes. 
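A simplified sketch of the one-encoder/two-decoder layout follows. It keeps only the essentials (a shared encoder whose features feed both decoders); the cross-decoder information sharing, depth, and channel counts of the actual network are not reproduced and all sizes here are assumptions.

```python
# Minimal sketch (shapes and layer counts are assumptions): a shared encoder
# with two decoders, one predicting the direct and one the global component.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class DirectGlobalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Two decoders consuming the same encoder features.
        self.dec_direct = conv_block(64 + 32, 32)
        self.dec_global = conv_block(64 + 32, 32)
        self.out_direct = nn.Conv2d(32, 3, 1)
        self.out_global = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        f1 = self.enc1(x)                     # full-resolution features
        f2 = self.enc2(self.pool(f1))         # half-resolution features
        shared = torch.cat([self.up(f2), f1], dim=1)  # skip shared by both decoders
        direct = self.out_direct(self.dec_direct(shared))
        global_ = self.out_global(self.dec_global(shared))
        return direct, global_

if __name__ == "__main__":
    net = DirectGlobalNet()
    photo = torch.randn(1, 3, 128, 128)       # a single structured-light photograph
    d, g = net(photo)
    print(d.shape, g.shape)
```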
  4. Coded aperture imaging has emerged as a solution to enhance light sensitivity and enable imaging in challenging conditions. However, the computational expense of image reconstruction poses limitations in processing efficiency. To address this, we propose a direct classification method using convolutional neural networks. By leveraging raw coded measurements, our approach eliminates the need for explicit image reconstruction, reducing computational overhead. We evaluate the effectiveness of this approach compared to traditional methods on the MNIST and CIFAR10 datasets. Our results demonstrate that direct image classification using raw coded measurements achieves comparable performance to traditional methods while reducing computational overhead and enabling real-time processing. These findings highlight the potential of machine learning in enhancing the decoding process and improving the overall performance of coded aperture imaging systems. 
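The idea of classifying raw coded measurements without reconstruction can be sketched as follows. The coded-aperture forward model is simulated here as a fixed random binary mask convolved with the scene, and the mask, layer sizes, and placeholder data are all assumptions rather than the paper's setup.

```python
# Minimal sketch: classify raw coded measurements directly with a small CNN,
# skipping explicit image reconstruction. The mask and model are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
MASK = (torch.rand(1, 1, 7, 7) > 0.5).float()  # fixed coded-aperture pattern (illustrative)

def coded_measurement(scene):
    # scene: (batch, 1, 28, 28) grayscale images (e.g. MNIST-sized digits)
    return F.conv2d(scene, MASK, padding=3)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),  # 10 classes, as in MNIST / CIFAR-10
)

if __name__ == "__main__":
    scenes = torch.rand(4, 1, 28, 28)             # placeholder images
    logits = classifier(coded_measurement(scenes))
    print(logits.shape)                           # torch.Size([4, 10])
```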
  5. Previously, the analysis of atomic force microscopy (AFM) images allowed us to distinguish normal from cancerous/precancerous human epithelial cervical cells using only the fractal dimension parameter. High-resolution maps of adhesion between the AFM probe and the cell surface were used in that study. However, the separation of cancerous and precancerous cells was rather poor (the area under the curve (AUC) was only 0.79, whereas the accuracy, sensitivity, and specificity were 74%, 58%, and 84%, respectively). At the same time, the separation between premalignant and malignant cells is the most significant from a clinical point of view. Here, we show that the introduction of machine learning methods for the analysis of adhesion maps allows us to distinguish precancerous and cancerous cervical cells with rather good precision (AUC, accuracy, sensitivity, and specificity are 0.93, 83%, 92%, and 78%, respectively). Substantial improvement in sensitivity is significant because of the unmet need in clinical practice to improve the screening of cervical cancer (a relatively low specificity can be compensated by combining this approach with other currently existing screening methods). The random forest decision tree algorithm was utilized in this study. The analysis was carried out using the data of six precancerous primary cell lines and six cancerous primary cell lines, each derived from different humans. The robustness of the classification was verified using K-fold cross-validation (K = 500). The results are statistically significant at p < 0.0001. Statistical significance was determined using the random shuffle method as a control. 
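A bare-bones version of the random-forest classification with repeated shuffled cross-validation might look like the sketch below. The feature matrix, labels, and split settings are synthetic placeholders; the study's adhesion-map descriptors, sample counts, and K = 500 folds are not reproduced.

```python
# Minimal sketch (placeholder data, not the study's AFM adhesion maps):
# random forest separating precancerous from cancerous cells from per-cell
# features, scored by ROC AUC under repeated shuffled cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
# Rows = cells, columns = hypothetical adhesion-map descriptors
# (e.g. fractal dimension, mean adhesion, roughness). Values are synthetic.
X = rng.normal(size=(120, 3))
y = rng.integers(0, 2, size=120)  # 0 = precancerous, 1 = cancerous (illustrative labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedShuffleSplit(n_splits=50, test_size=0.25, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(scores.mean(), scores.std())
```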