

Search for: All records

Award ID contains: 1750970

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Volpe, Giovanni; Pereira, Joana B; Brunner, Daniel; Ozcan, Aydogan (Ed.)
    Measuring Hemoglobin (Hb) levels is required for the assessment of different health conditions, such as anemia, a condition in which there are insufficient healthy red blood cells to carry enough oxygen to the body's tissues. Measuring Hb levels requires the extraction of a blood sample, which is then sent to a laboratory for analysis. This is an invasive procedure that complicates the continuous monitoring of Hb levels. Noninvasive techniques, including imaging and photoplethysmography (PPG) signals combined with machine learning, are being investigated for continuous measurement of Hb. However, the availability of real data to train the algorithms is limited, which hinders the generalization and implementation of such techniques in healthcare settings. In this work, we present a computational model based on Monte Carlo simulations that can generate multispectral PPG signals covering a broad range of Hb levels. These signals are then used to train a Deep Learning (DL) model to estimate hemoglobin levels. Through this approach, the DL model learns valuable insights about the relationships between PPG signals, oxygen saturation, and Hb levels. The signals were generated by propagating a source through a volume containing the skin tissue properties and the target physiological parameters. The source consisted of plane waves at the 660 nm and 890 nm wavelengths. A range of 6 g/dL to 18 g/dL Hb values was used to generate 468 PPGs to train a Convolutional Neural Network (CNN). The initial results show high accuracy in detecting low levels of Hb. To the best of our knowledge, the complexity of the biological interactions involved in measuring hemoglobin levels has yet to be fully modeled. The presented model offers an alternative approach to studying the effects of changes in Hb levels on PPG signal morphology and its interaction with other physiological parameters present in the optical path of the measured signals.
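The intuition behind using two wavelengths can be illustrated with the classic "ratio-of-ratios" feature from pulse oximetry, a sketch unrelated to the paper's own pipeline (which feeds raw simulated signals to a CNN); the synthetic waveforms and amplitudes below are arbitrary toy values, not Monte Carlo output:

```python
import numpy as np

def ratio_of_ratios(ppg_red, ppg_ir):
    """Classic two-wavelength PPG feature: (AC/DC at ~660 nm) / (AC/DC at ~890 nm).
    Because oxy- and deoxy-hemoglobin absorb differently at the two wavelengths,
    this ratio carries information about the absorbers in the optical path."""
    ac = lambda s: s.max() - s.min()   # pulsatile component
    dc = lambda s: s.mean()            # baseline component
    return (ac(ppg_red) / dc(ppg_red)) / (ac(ppg_ir) / dc(ppg_ir))

# toy synthetic PPG cycles (arbitrary amplitudes)
t = np.linspace(0, 1, 250)
red = 1.0 + 0.02 * np.sin(2 * np.pi * t)
ir = 1.0 + 0.04 * np.sin(2 * np.pi * t)
R = ratio_of_ratios(red, ir)
```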
  2. The Finite-Difference Time-Domain (FDTD) method is a numerical modeling technique regarded by researchers as one of the most accurate methods for simulating the propagation of an electromagnetic wave through an object over time. Due to the nature of the method, FDTD can be computationally expensive when used in complex settings, such as light propagation in highly heterogeneous objects, as in the imaging of tissues. In this paper, we explore a Deep Learning (DL) model that predicts the evolution of an electromagnetic field in a heterogeneous medium; in particular, we model the propagation of a Gaussian beam through skin tissue layers. This is relevant for the characterization of microscopy imaging of tissues. Our proposed model, named FDTD-net, is based on the U-net architecture and predicts the electric field (EF) with good accuracy and in less time than the FDTD method. A dataset of different geometries was created to simulate the propagation of the electric field. The propagation of the electric field was initially generated using the traditional FDTD method, and this dataset was used for training and testing of the FDTD-net. The experiments show that the FDTD-net learns the physics related to the propagation of the source in heterogeneous objects, and it can capture changes in the field due to changes in the object morphology. As a result, we present a DL model that can compute a propagated electric field in less time than the traditional method.
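The FDTD baseline the network is compared against can be illustrated with a minimal 1-D leapfrog update; this is a sketch in normalized units with arbitrary grid and source parameters, not the paper's heterogeneous 2-D/3-D tissue model:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, src_pos=50):
    """Minimal 1-D FDTD (Yee scheme, normalized units, Courant number 0.5)
    driven by a soft Gaussian source."""
    ez = np.zeros(n_cells)  # electric field samples
    hy = np.zeros(n_cells)  # magnetic field samples (staggered half-cell)
    for t in range(n_steps):
        # update H from the spatial derivative of E
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # update E from the spatial derivative of H
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # inject a Gaussian pulse at the source cell
        ez[src_pos] += np.exp(-((t - 30) ** 2) / 100.0)
    return ez

field = fdtd_1d()
```

The time-stepping loop is what makes FDTD expensive at scale: the cost grows with grid size times step count, which is the motivation for replacing it with a single forward pass of a trained network.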
  3. Sepsis is a severe medical illness with over 1.7 million cases reported each year in the United States. Early diagnosis of sepsis is critical to ensuring adequate treatment, yet it remains a major challenge in healthcare due to the nonspecificity of the initial symptoms and the lack of currently available biomarkers with sufficient specificity or sensitivity for clinical practice. Wearable optical technologies such as photoplethysmography (PPG), which uses optical technology to measure changes in blood volume in peripheral tissues, enable continuous monitoring. Identifying the modest physiological changes that indicate sepsis can be challenging, since they may occur without an overt bodily reaction. Deep Learning (DL) models can help close the diagnostic gap in sepsis diagnosis and intervention. This study analyzes sepsis-related characteristics in PPG signals using a collection of waveform records from both sepsis and control cases. The proposed model consists of five layers: input sequence, long short-term memory (LSTM), fully-connected, softmax, and classification. The LSTM layer is chosen to extract and filter features from cycles of PPG signals; the features then pass through a fully-connected layer to be classified. We tested our LSTM-based model on 915 one-second intervals to identify and classify sepsis severity. Our LSTM-based model accurately detected sepsis (91.30% accuracy in training and 89.74% in testing). The sepsis severity categorization achieved an accuracy of 85.9% in training and 81.4% in testing. Multiple training runs were conducted to validate the model's detection capabilities. Preliminary results show that a deep learning model using an LSTM layer can detect and categorize sepsis from PPG data, potentially allowing for real-time diagnosis and monitoring within a single cycle.
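The sequence-to-class pipeline described (LSTM features, then fully-connected + softmax) can be sketched with a single hand-written LSTM cell; all dimensions, weights, and the input sequence below are illustrative assumptions, not the trained model or its data:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from input x and previous state (h, c).
    W, U, b hold the four gate parameter blocks stacked together."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = (1.0 / (1.0 + np.exp(-v)) for v in (i, f, o))  # sigmoid gates
    c_new = f * c + i * np.tanh(g)   # cell state: forget old, write new
    h_new = o * np.tanh(c_new)       # hidden state exposed to later layers
    return h_new, c_new

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# toy dimensions: 1 PPG sample per step, 8 hidden units, 3 severity classes
rng = np.random.default_rng(0)
n_in, n_h, n_cls = 1, 8, 3
W = rng.normal(size=(4 * n_h, n_in))
U = rng.normal(size=(4 * n_h, n_h))
b = np.zeros(4 * n_h)
h = c = np.zeros(n_h)
for x in rng.normal(size=(100, n_in)):   # one toy "interval" of 100 samples
    h, c = lstm_step(x, h, c, W, U, b)
W_fc = rng.normal(size=(n_cls, n_h))
probs = softmax(W_fc @ h)                # fully-connected layer + softmax
```

The final `probs` vector plays the role of the classification layer: the predicted severity class is its argmax.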
  4. Different mechanisms are used for the discovery of materials. These include creating a material by a trial-and-error process without knowing its properties. Other methods are based on computational simulations or mathematical and statistical approaches, such as Density Functional Theory (DFT). A well-known strategy combines elements to predict their properties and selects the subset with the properties of interest. Carrying out exhaustive calculations to predict the properties of the compounds found this way may require a high computational cost. Therefore, there is a need for methods that identify materials with a desired set of properties while reducing the search space and, consequently, the computational cost. In this work, we present a genetic algorithm that can find a higher percentage of compounds with specific properties than state-of-the-art methods, such as those based on combinatorial screening. Both methods are compared in the search for ternary compounds in an unconstrained space, using a Deep Neural Network (DNN) to predict properties such as formation enthalpy, band gap, and stability; we focus on formation enthalpy. As a result, we provide a genetic algorithm capable of finding up to 60% more compounds with atypical property values, using DNNs for their prediction.
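The search loop can be sketched as a standard genetic algorithm over ternary compositions; this is a generic GA skeleton under stated assumptions, with a trivial surrogate fitness standing in for the paper's DNN property predictor, and all hyperparameters (population size, mutation rate) chosen arbitrarily:

```python
import random

def genetic_search(fitness, pool, pop_size=30, generations=40, mut_rate=0.2, seed=0):
    """Toy GA: each candidate is a 3-element tuple (a ternary composition)
    drawn from `pool`; `fitness` stands in for the DNN predictor, with lower
    predicted formation enthalpy treated as better."""
    rng = random.Random(seed)
    pop = [tuple(rng.sample(pool, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # rank by predicted property
        survivors = pop[: pop_size // 2]         # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, 3)            # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < mut_rate:          # mutate one element
                child[rng.randrange(3)] = rng.choice(pool)
            children.append(tuple(child))
        pop = survivors + children
    return min(pop, key=fitness)

# hypothetical surrogate: pretend the sum of atomic numbers tracks the property
elements = list(range(1, 30))
best = genetic_search(lambda c: sum(c), elements)
```

Only the candidates the GA actually visits get a fitness (i.e., DNN) evaluation, which is how the approach avoids the exhaustive screening cost.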
  5. Data fusion techniques have gained special interest in remote sensing due to the available capabilities to obtain measurements of the same scene using different instruments with varied resolution domains. In particular, multispectral (MS) and hyperspectral (HS) image fusion is used to generate high spatial and spectral images (HSEI). Deep learning data fusion models based on Long Short Term Memory (LSTM) and Convolutional Neural Networks (CNN) have been developed to achieve this task. In this work, we present a Multi-Level Propagation Learning Network (MLPLN) based on an LSTM model that can be trained with variable data sizes in order to achieve the fusion process. Moreover, the MLPLN provides an intrinsic data augmentation feature that reduces the required number of training samples. The proposed model generates an HSEI by fusing a high-spatial-resolution MS image and a low-spatial-resolution HS image. The performance of the model is studied and compared to existing CNN and LSTM approaches by evaluating the quality of the fused image using the structural similarity metric (SSIM). The results show that an increase in the SSIM is still obtained while reducing the number of training samples used to train the MLPLN model.
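The SSIM score used for evaluation combines luminance, contrast, and structure comparisons; a simplified single-window version (the standard formula applied globally, rather than the windowed form typically used in practice, with the usual small stabilizing constants) looks like this:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM between two images with values in [0, 1].
    c1 and c2 are the usual small constants that stabilize the ratios."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
s_same = ssim_global(img, img)      # identical images score 1
s_anti = ssim_global(img, 1 - img)  # inverted image scores much lower
```

Windowed SSIM averages this quantity over local patches, which is what makes it sensitive to local structural distortions introduced by a fusion model.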
  6. Properties of material compositions and crystal structures have been explored through density functional theory (DFT) calculations, using databases such as the Open Quantum Materials Database (OQMD). Databases like these are currently used for training advanced machine learning and deep neural network models, the latter providing higher performance when predicting properties of materials. However, current alternatives have shown a deterioration in accuracy when the number of layers in their architecture increases (the over-fitting problem). As an alternative method to address this problem, we have implemented residual neural network architectures based on Merge and Run Networks, IRNet, and UNet to improve performance while relaxing the observed network depth limitation. The evaluation of the proposed architectures includes a 9:1 train/test split as well as 10-fold cross-validation. In the experiments we found that our proposed architectures based on IRNet and UNet obtain a lower Mean Absolute Error (MAE) than current strategies. The full implementation (Python, TensorFlow, and Keras) and the trained networks will be available online for community validation and for advancing the state of the art from our findings.
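The common ingredient of the residual families named above is the identity skip connection, output = x + F(x), which keeps deep stacks trainable; a minimal numpy sketch (toy dimensions and deliberately small random weights, not any of the cited architectures) shows that fifty stacked residual blocks remain a near-identity, stable map:

```python
import numpy as np

def residual_block(x, w1, w2):
    """Residual block: output = x + F(x), where F is a small two-layer MLP.
    The identity shortcut lets gradients and activations pass through
    unchanged, which is what relaxes the depth limitation."""
    h = np.maximum(w1 @ x, 0.0)  # ReLU hidden layer
    return x + w2 @ h            # identity shortcut plus learned correction

rng = np.random.default_rng(2)
d = 16
x = rng.normal(size=d)
# small weights: each block is a near-identity map, so depth stays stable
blocks = [(0.01 * rng.normal(size=(d, d)), 0.01 * rng.normal(size=(d, d)))
          for _ in range(50)]
y = x
for w1, w2 in blocks:
    y = residual_block(y, w1, w2)
```

A plain (non-residual) stack of the same depth would instead compose fifty matrix products, whose output scale can explode or vanish, one symptom of the depth problem the abstract describes.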
  7. Messinger, David W.; Velez-Reyes, Miguel (Ed.)
    Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high-spatial-resolution multispectral image with a lower-spatial-resolution hyperspectral image to generate a high spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multiple-step process that transitions from low to high spatial resolution, using an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with spatial resolution increased from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy on several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and CNN classification. The classification results show better performance, in both overall and average accuracy, for the images generated with the multi-scale LSTM fusion than for the CNN fusion.
  8. Messinger, David W.; Velez-Reyes, Miguel (Ed.)
    Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, thus allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with basic geometrical spatial shapes, like houses and buildings. On the other hand, methods based on spectral information tend to perform better in natural scenery with more shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, small training sets, or objects with similar reflectance values pose a greater challenge to obtaining high classification accuracy. Therefore, it is difficult to find a single technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach aiming to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and Class-Dependent Sparse Representation generate initial classification data; then 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the per-pixel class probabilities from each implemented classifier and, using a softmax activation function, estimates the final decision. We present results showing the performance of our method on different hyperspectral image datasets.
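The two-step scheme can be sketched in numpy, with the learned CNN fusion layer replaced by a fixed weighted combination plus softmax; the classifier probabilities, weights, and dimensions below are all toy assumptions, not the paper's setup:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_decisions(prob_stack, weights):
    """Step 2 stand-in: combine per-classifier class probabilities with fixed
    weights (a CNN would learn this mapping), apply softmax, and take the
    argmax as the unified per-pixel label.
    prob_stack: (n_classifiers, n_pixels, n_classes)."""
    combined = np.tensordot(weights, prob_stack, axes=1)  # (n_pixels, n_classes)
    return softmax(combined, axis=-1).argmax(axis=-1)

# toy step 1 output: 3 classifiers, 4 pixels, 5 classes
rng = np.random.default_rng(3)
probs = softmax(rng.normal(size=(3, 4, 5)), axis=-1)
labels = fuse_decisions(probs, np.array([0.5, 0.3, 0.2]))
```

The point of the learned version is that the fusion weights need not be uniform or even global: a CNN can weight classifiers differently per class and per spatial neighborhood.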
  9. The Compressive Sensing (CS) framework has demonstrated improved acquisition efficiency in a variety of clinical applications. Of interest to this work is Reflectance Confocal Microscopy (RCM), where CS can enable a drastic reduction in instrumentation complexity and image acquisition times. However, CS introduces the disadvantage of requiring a time-consuming and computationally intensive process for image recovery. To mitigate this, the current document details our preliminary work on expanding a Deep Learning architecture for the acquisition and fast recovery of RCM images using CS. We show preliminary recoveries of RCM images of both a synthetic target and heterogeneous skin tissue using a state-of-the-art network architecture on compressive measurements at various undersampling rates. In addition, we propose an application-specific addition to an established network architecture and evaluate its ability to further increase the accuracy of recovered CS RCM images and remove visual artifacts. Our initial results show that it is possible to recover compressively sampled images at near-real-time rates with quality comparable to the established computationally intensive and time-consuming optimization-based methods common in CS applications.
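The acquisition model behind CS is a handful of random projections of a sparse signal, y = Ax with far fewer measurements than unknowns; the sketch below uses an oracle least-squares on the known support as a stand-in for the slow iterative solvers that the DL recovery network replaces (dimensions and sparsity are arbitrary toy values):

```python
import numpy as np

# Compressive-sensing toy: a k-sparse signal measured with m << n projections.
rng = np.random.default_rng(4)
n, m, k = 64, 24, 3                        # signal length, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)            # the k nonzero coefficients
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # compressive (undersampled) measurements

# oracle recovery: least squares restricted to the true support; real CS uses
# iterative sparse solvers here, which is the costly step DL aims to replace
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
recovery_error = np.linalg.norm(x_hat - x)
```

With the support known, recovery is exact; the practical difficulty, and the motivation for learned recovery, is that real solvers must also discover the support from y alone.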
  10. Pixel-level fusion of satellite images coming from multiple sensors allows for an improvement in the quality of the acquired data, both spatially and spectrally. In particular, multispectral and hyperspectral images have been fused to generate images with high spatial and spectral resolution. In the literature, there are several approaches to this task; nonetheless, those techniques still present a loss of relevant spatial information during the fusion process. This work presents a multi-scale deep learning model to fuse multispectral and hyperspectral data with high-spatial-and-low-spectral resolution (HSaLS) and low-spatial-and-high-spectral resolution (LSaHS), respectively. As a result of the fusion scheme, a high-spatial-and-spectral-resolution image (HSaHS) can be obtained. To accomplish this, we have developed a new scalable high-spatial-resolution process in which the model learns how to transition from low spatial resolution to an intermediate spatial resolution level and finally to the high spatial-spectral resolution image. This step-by-step process significantly reduces the loss of spatial information. The results of our approach show better performance in terms of both the structural similarity index and the signal-to-noise ratio.
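The step-by-step transition can be sketched with plain numpy: the hyperspectral cube gains one 2x spatial level at a time, with the multispectral image's spatial detail re-injected at each level. Everything here is a deliberately crude stand-in, nearest-neighbour upsampling and a brightness-scaling injection in place of the learned transitions, and the single-band MS image and cube sizes are arbitrary:

```python
import numpy as np

def upsample2(img):
    """Nearest-neighbour 2x spatial upsampling of an (H, W, bands) cube."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def stepwise_fuse(hs_low, ms_high, steps=2):
    """Multi-scale fusion sketch: raise the HS cube's spatial resolution one
    2x level per step, modulating each level with the MS image's spatial
    detail (here simple brightness scaling; the model learns each transition)."""
    fused = hs_low
    for s in range(steps):
        fused = upsample2(fused)
        factor = 2 ** (steps - 1 - s)            # bring MS detail to this level
        detail = ms_high[::factor, ::factor]
        fused = fused * (detail / (detail.mean() + 1e-8))[..., None]
    return fused

hs = np.random.default_rng(5).random((8, 8, 30))  # low spatial, high spectral
ms = np.random.default_rng(6).random((32, 32))    # high spatial, toy single band
out = stepwise_fuse(hs, ms)                       # (32, 32, 30) fused cube
```

The intermediate level is the key design choice: jumping from 8x8 to 32x32 in one step forces a single mapping to invent all the missing detail, while two 2x steps let each stage correct the previous one, which is the loss-reduction argument made above.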