Title: Convolutional Network Analysis of Optical Micrographs for Liquid Crystal Sensors
We provide an in-depth convolutional neural network (CNN) analysis of optical responses of liquid crystals (LCs) when exposed to different chemical environments. Our aim is to identify informative features that can be used to construct automated LC-based chemical sensors and shed some light on the underlying phenomenon that governs and distinguishes LC responses. Previous work demonstrated that, by using features extracted from AlexNet, grayscale micrographs of different LC responses can be classified with an accuracy of 99%. Reaching such high levels of accuracy, however, required the use of a large number of features (on the order of thousands), which was computationally intensive and clouded the physical interpretability of the dominant features. To address these issues, here we report a study on the effectiveness of using features extracted from color micrographs using VGG16, which is a more compact CNN than AlexNet. Our analysis reveals that features extracted from the first and second convolutional layers of VGG16 are sufficient to achieve a perfect classification accuracy while reducing the number of features to less than 100. The number of features is further reduced to 10 via recursive elimination with a minimal loss in classification accuracy (5–10%). This reduction procedure reveals that differences in spatial color patterns are developed within seconds in the LC response. From this, we conclude that hue distributions provide an informative set of features that can be used to characterize LC sensor responses. We also hypothesize that differences in the spatial correlation length of LC textures detected by VGG16 with DMMP and water likely reflect differences in the anchoring energy of the LC on the surface of the sensor. Our results hint at fresh approaches for the design of LC-based sensors based on the characterization of spontaneous fluctuations in the orientation (as opposed to changes in time-averaged orientations reported in the literature).
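The abstract concludes that hue distributions are an informative feature set for characterizing LC sensor responses. A minimal sketch of that featurization is shown below, in pure NumPy; the function name, the 10-bin resolution, and the RGB-to-hue conversion are illustrative assumptions, not the authors' code (which extracts features from VGG16 convolutional layers).

```python
import numpy as np

def hue_histogram(rgb, bins=10):
    """Normalized per-pixel hue histogram of a color micrograph.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    Gray pixels (undefined hue) are excluded from the histogram.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = np.max(rgb, axis=-1)
    minc = np.min(rgb, axis=-1)
    delta = maxc - minc
    h = np.zeros_like(maxc)
    mask = delta > 0
    # Standard RGB -> hue conversion, branching on the maximal channel.
    rm = mask & (maxc == r)
    gm = mask & (maxc == g) & ~rm
    bm = mask & ~rm & ~gm
    h[rm] = (60.0 * (g[rm] - b[rm]) / delta[rm]) % 360.0
    h[gm] = 60.0 * (b[gm] - r[gm]) / delta[gm] + 120.0
    h[bm] = 60.0 * (r[bm] - g[bm]) / delta[bm] + 240.0
    hist, _ = np.histogram(h[mask], bins=bins, range=(0.0, 360.0))
    hist = hist.astype(float)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

A vector like this (one entry per hue bin) could then feed any low-dimensional classifier, in the spirit of the ~10-feature models the abstract reports.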
Award ID(s):
1837812 1837821 1720415
PAR ID:
10188702
Journal Name:
Journal of Physical Chemistry
Volume:
124
ISSN:
1932-7455
Page Range / eLocation ID:
15152-15161
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
1. We report how analysis of the spatial and temporal optical responses of liquid crystal (LC) films to targeted gases, when performed using a machine learning methodology, can advance the sensing of gas mixtures and provide important insights into the physical processes that underlie the sensor response. We develop the methodology using O3 and Cl2 mixtures (representative of an important class of analytes) and LCs supported on metal perchlorate-decorated surfaces as a model system. Whereas O3 and Cl2 both diffuse through LC films and undergo redox reactions with the supporting metal perchlorate surfaces to generate similar initial and final optical states of the LCs, we show that a 3-dimensional convolutional neural network (3D CNN) can extract feature information that is encoded in the spatiotemporal color patterns of the LCs to detect the presence of both O3 and Cl2 species in mixtures as well as to quantify their concentrations. Our analysis reveals that O3 detection is driven by the transition time over which the brightness of the LC changes, while Cl2 detection is driven by color fluctuations that develop late in the optical response of the LC. We also show that we can detect the presence of Cl2 even when the concentration of O3 is orders of magnitude greater than the Cl2 concentration. The proposed methodology is generalizable to a wide range of analytes, reactive surfaces and LCs, and has the potential to advance the design of portable LC monitoring devices (e.g., wearable devices) for analyzing gas mixtures using spatiotemporal color fluctuations.
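The 3D CNN described here convolves jointly over two spatial dimensions and time, which is what lets it pick up features such as transition times and late-developing color fluctuations. A toy single-channel "valid"-mode 3D convolution (pure NumPy, for illustration only; not the authors' architecture) shows the core operation:

```python
import numpy as np

def conv3d_valid(video, kernel):
    """'Valid' 3D convolution of a (T, H, W) stack with a (t, h, w) kernel.

    Each output voxel is the sum of an elementwise product between the
    kernel and the matching spatiotemporal window of the input.
    """
    t, h, w = kernel.shape
    T, H, W = video.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(video[i:i + t, j:j + h, k:k + w] * kernel)
    return out
```

Because the kernel spans the time axis, a learned filter can respond to *when* a color change happens, not only where, which is the distinction the abstract exploits to separate O3 from Cl2.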
  2. Abstract Interpreting time domain reflectometry (TDR) waveforms obtained in soils with non‐uniform water content is an open question. We design a new TDR waveform interpretation model based on convolutional neural networks (CNNs) that can reveal the spatial variations of soil relative permittivity and water content along a TDR sensor. The proposed model, namely TDR‐CNN, is constructed with three modules. First, the geometrical features of the TDR waveforms are extracted with a simplified version of VGG16 network. Second, the reflection positions in a TDR waveform are traced using a 1D version of the region proposal network. Finally, the soil relative permittivity values are estimated via a CNN regression network. The three modules are developed in Python using Google TensorFlow and Keras API, and then stacked together to formulate the TDR‐CNN architecture. Each module is trained separately, and data transfer among the modules can be facilitated automatically. TDR‐CNN is evaluated using simulated TDR waveforms with varying relative permittivity but under a relatively stable soil electrical conductivity, and the accuracy and stability of the TDR‐CNN are shown. TDR measurements from a water infiltration study provide an application for TDR‐CNN and a comparison between TDR‐CNN and an inverse model. The proposed TDR‐CNN model is simple to implement, and modules in TDR‐CNN can be updated or fine‐tuned individually with new data sets. In conclusion, TDR‐CNN presents a model architecture that can be used to interpret TDR waveforms obtained in soil with a heterogeneous water content distribution. 
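TDR-CNN is described as three separately trained modules stacked into one pipeline, with data passed automatically from one to the next. A minimal sketch of that composition pattern (generic Python; the module names in the comment mirror the abstract, but the helper itself is hypothetical):

```python
from typing import Callable, Sequence

def stack_modules(modules: Sequence[Callable]) -> Callable:
    """Compose independently trained modules into one pipeline.

    Each module's output becomes the next module's input, as in the
    TDR-CNN flow: waveform feature extractor -> reflection-position
    proposal network -> permittivity regression network.
    """
    def pipeline(x):
        for module in modules:
            x = module(x)
        return x
    return pipeline
```

The practical benefit noted in the abstract follows from this structure: any one stage can be retrained or fine-tuned on new data without touching the others, since the stages only interact through their inputs and outputs.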
3. With Convolutional Neural Networks (CNN) becoming more of a commodity in the computer vision field, many have attempted to improve CNN accuracy, to the point that CNN accuracies have surpassed human capabilities. However, with deeper networks, the number of computations, and consequently the power needed per classification, has grown considerably. In this paper, we propose Iterative CNN (ICNN) by reformulating the CNN from a single feed-forward network into a series of sequentially executed smaller networks. Each smaller network processes a sub-sample of the input image along with features extracted by the previous network, and enhances the classification accuracy. Upon reaching an acceptable classification confidence, ICNN immediately terminates. The proposed network architecture allows the CNN function to be dynamically approximated, creating the possibility of early termination and performing the classification with far fewer operations than a conventional CNN. Our results show that this iterative approach competes with the original larger networks in terms of accuracy while incurring far less computational complexity by classifying many images in early iterations.
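The early-termination control flow at the heart of ICNN can be sketched in a few lines. This is a schematic, not the paper's implementation: each stage here is any callable returning a label and a confidence, and the threshold value is an assumed placeholder (feature hand-off between stages is omitted for brevity):

```python
def iterative_classify(stages, x, confidence_threshold=0.9):
    """Run small classifier stages in sequence; stop early when confident.

    stages: callables mapping an input to (label, confidence).
    Returns the chosen label and the number of stages actually executed.
    """
    label = None
    for i, stage in enumerate(stages):
        label, confidence = stage(x)
        if confidence >= confidence_threshold:
            return label, i + 1  # early termination: later stages never run
    return label, len(stages)   # fall back to the last stage's answer
```

Easy inputs exit after the first cheap stage and only hard inputs pay for the full cascade, which is the source of the power savings the abstract claims.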
4. In this paper, we propose a deep multimodal fusion network to fuse multiple modalities (face, iris, and fingerprint) for person identification. The proposed deep multimodal fusion algorithm consists of multiple streams of modality-specific Convolutional Neural Networks (CNNs), which are jointly optimized at multiple feature abstraction levels. Multiple features are extracted at several different convolutional layers from each modality-specific CNN for joint feature fusion, optimization, and classification. Features extracted at different convolutional layers of a modality-specific CNN represent the input at several different levels of abstraction. We demonstrate that efficient multimodal classification can be accomplished with a significant reduction in the number of network parameters by exploiting these multi-level abstract representations extracted from all the modality-specific CNNs. We demonstrate an increase in multimodal person identification performance by utilizing the proposed multi-level feature abstract representations in our multimodal fusion, rather than using only the features from the last layer of each modality-specific CNN. We show that our deep multimodal CNNs with fusion at several different feature abstraction levels can significantly outperform unimodal representations in accuracy. We also demonstrate that the joint optimization of all the modality-specific CNNs outperforms score- and decision-level fusion of independently optimized CNNs.
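The fusion step described here amounts to gathering feature vectors from several layers of each modality-specific network and joining them into one vector for classification. A minimal sketch of that gathering step (pure NumPy; the function and dictionary layout are illustrative assumptions, not the paper's code):

```python
import numpy as np

def fuse_features(per_modality_layers):
    """Concatenate multi-level features from all modality streams.

    per_modality_layers: dict mapping a modality name (e.g. "face",
    "iris", "fingerprint") to a list of 1D feature vectors, one per
    tapped convolutional layer of that modality's CNN.
    Returns a single joint feature vector for downstream classification.
    """
    parts = [feat for layers in per_modality_layers.values() for feat in layers]
    return np.concatenate(parts)
```

In the jointly optimized setting the abstract advocates, the classifier on top of this concatenated vector back-propagates into every stream at once, rather than fusing independently trained scores or decisions after the fact.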
5. Predicting materials’ microstructure from the desired properties is critical for exploring new materials. Herein, a novel regression‐based prediction of scanning electron microscopy (SEM) images for a target hardness using generative adversarial networks (GANs) is demonstrated. This article aims at generating realistic SEM micrographs, which contain rich features (e.g., grain and neck shapes, tortuosity, spatial configurations of grains/pores). Together, these features affect material properties but are difficult to predict. A high‐performance GAN, named ‘Microstructure‐GAN’ (or M‐GAN), with residual blocks that significantly improve the details of synthesized micrographs is established. The model was trained with experimentally obtained SEM micrographs of laser‐sintered alumina. After training, high‐fidelity, feature‐rich micrographs can be predicted for an arbitrary target hardness. Microstructure details such as small pores and grain boundaries can be observed even at the nanometer scale (∼50 nm) in the predicted 1000× micrographs. A pretrained convolutional neural network (CNN) was used to evaluate the accuracy of the predicted micrographs with rich features for specific hardness. The relative bias of the CNN‐evaluated value of the generated micrographs was within 2.1%–2.7% of the values for experimental micrographs. This approach can potentially be applied to other microscopy data, such as atomic force, optical, and transmission electron microscopy.
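The abstract credits residual blocks with improving the detail of M-GAN's synthesized micrographs. The defining computation of a residual block is an identity skip connection added to a learned transformation; a toy NumPy version (illustrative only — the real blocks are convolutional and learned, and this dense form is an assumption for brevity):

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    """Toy residual block: output = x + activation(x @ weight).

    The identity skip path lets gradients flow through deep stacks
    unattenuated, which is what makes very deep generators trainable
    and able to render fine detail.
    """
    return x + activation(x @ weight)
```

Stacking many such blocks deepens the network without the vanishing-gradient penalty a plain stack of layers would incur, since each block only has to learn a correction to its input.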