

Title: Time Domain Reflectometry Waveform Interpretation With Convolutional Neural Networks
Abstract

Interpreting time domain reflectometry (TDR) waveforms obtained in soils with non-uniform water content remains an open problem. We design a new TDR waveform interpretation model based on convolutional neural networks (CNNs) that can reveal the spatial variation of soil relative permittivity and water content along a TDR sensor. The proposed model, termed TDR-CNN, is constructed from three modules. First, the geometrical features of the TDR waveform are extracted with a simplified version of the VGG16 network. Second, the reflection positions in the waveform are traced using a 1D version of the region proposal network. Finally, the soil relative permittivity values are estimated via a CNN regression network. The three modules are developed in Python using Google TensorFlow and the Keras API and then stacked to form the TDR-CNN architecture. Each module is trained separately, and data transfer among the modules is handled automatically. TDR-CNN is evaluated on simulated TDR waveforms with varying relative permittivity but relatively stable soil electrical conductivity, demonstrating its accuracy and stability. TDR measurements from a water infiltration study provide an application of TDR-CNN and a comparison against an inverse model. The proposed TDR-CNN model is simple to implement, and its modules can be updated or fine-tuned individually with new data sets. In conclusion, TDR-CNN presents a model architecture that can interpret TDR waveforms obtained in soil with a heterogeneous water content distribution.
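The three-module stacking can be sketched in Keras as below. This is a minimal sketch, assuming an illustrative waveform length, layer widths, and output dimensions; it is not the published configuration.

```python
# Hypothetical sketch of the three-module TDR-CNN stack described above.
# All sizes (N_SAMPLES, MAX_REFLECTIONS, filter counts) are assumptions.
from tensorflow.keras import layers, models

N_SAMPLES = 2048       # assumed length of a digitized TDR waveform
MAX_REFLECTIONS = 8    # assumed maximum number of traced reflections

def feature_extractor():
    """Module 1: simplified VGG16-style 1D feature extractor."""
    inp = layers.Input(shape=(N_SAMPLES, 1))
    x = inp
    for filters in (64, 128, 256):
        x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(2)(x)
    return models.Model(inp, x, name="vgg_features")

def reflection_proposer(feature_shape):
    """Module 2: 1D region-proposal-style head locating reflections."""
    inp = layers.Input(shape=feature_shape)
    x = layers.Conv1D(256, 3, padding="same", activation="relu")(inp)
    x = layers.GlobalAveragePooling1D()(x)
    # Normalized positions of reflections along the waveform.
    pos = layers.Dense(MAX_REFLECTIONS, activation="sigmoid", name="positions")(x)
    return models.Model(inp, pos, name="reflection_rpn")

def permittivity_regressor(feature_shape):
    """Module 3: CNN regression of relative permittivity per segment."""
    inp = layers.Input(shape=feature_shape)
    x = layers.Conv1D(128, 3, padding="same", activation="relu")(inp)
    x = layers.GlobalAveragePooling1D()(x)
    eps = layers.Dense(MAX_REFLECTIONS, activation="linear", name="permittivity")(x)
    return models.Model(inp, eps, name="eps_regressor")

# Stack the separately trained modules into one inference pipeline.
waveform = layers.Input(shape=(N_SAMPLES, 1))
feats = feature_extractor()(waveform)
tdr_cnn = models.Model(
    waveform,
    [reflection_proposer(feats.shape[1:])(feats),
     permittivity_regressor(feats.shape[1:])(feats)],
    name="TDR_CNN",
)
```

Because each module is a standalone Keras model, any one of them can be retrained or fine-tuned on new data and swapped back into the stack.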

 
Award ID(s):
2037504
NSF-PAR ID:
10397011
Publisher / Repository:
DOI PREFIX: 10.1029
Date Published:
Journal Name:
Water Resources Research
Volume:
59
Issue:
2
ISSN:
0043-1397
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

We explore the potential of the adjoint‐state tsunami inversion method for rapid and accurate near‐field tsunami source characterization using S‐net, an array of ocean bottom pressure gauges. Compared to earthquake‐based methods, this method can obtain more accurate predictions for the initial water elevation of the tsunami source, including potential secondary sources, leading to accurate water height and wave run‐up predictions. Unlike finite‐fault tsunami source inversions, the adjoint method achieves high‐resolution results without requiring densely gridded Green's functions, reducing computation time. However, optimal results require a dense instrument network with sufficient azimuthal coverage. S‐net meets these requirements and reduces data collection time, facilitating the inversion and timely issuance of tsunami warnings. Since the method has not yet been applied to dense, near‐field data, we test it on synthetic waveforms of the 2011 Mw 9.0 Tohoku earthquake and tsunami, including triggered secondary sources. The results indicate that with a static source model without noise, using the first 5 min of the waveforms yields a favorable performance with an average accuracy score of 93%, and the largest error of predicted wave amplitudes ranges between −5.6 and 1.9 m. Using the first 20 min, secondary sources were clearly resolved. We also demonstrate the method's applicability using S‐net recordings of the 2016 Mw 6.9 Fukushima earthquake. The findings suggest that lower‐magnitude events require a longer waveform duration for accurate adjoint inversion. Moreover, the estimated stress drop obtained from our inverted tsunami source, assuming uniform slip, aligns with estimations from recent studies.
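As a conceptual illustration of the adjoint-state update, not the shallow-water solver or S-net geometry used in the study, the sketch below inverts for an initial-elevation source with a stand-in linear forward operator. The operator `G`, the grid sizes, and the step rule are assumptions.

```python
# Minimal numpy sketch of the adjoint-state idea: since the forward tsunami
# model is linear in the initial elevation, the gradient of the waveform
# misfit is the adjoint (time-reversed) propagation of the data residuals.
# `G` is a random stand-in, not a shallow-water Green's function.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_obs = 100, 400              # source grid cells, gauge time samples
G = rng.normal(size=(n_obs, n_src))  # stand-in propagation operator

def forward(m):   # predicted gauge records for source elevation m
    return G @ m

def adjoint(r):   # adjoint operator applied to data residuals
    return G.T @ r

m_true = np.zeros(n_src)
m_true[40:60] = 1.0          # synthetic uplift patch as the "true" source
d_obs = forward(m_true)

m = np.zeros(n_src)
step = 1.0 / np.linalg.norm(G, 2) ** 2   # step below the Lipschitz bound
for _ in range(200):
    residual = forward(m) - d_obs
    m -= step * adjoint(residual)        # steepest-descent source update

print("recovered source peak:", m.max())
```

With more gauges (rows of `G`) and better azimuthal coverage, the least-squares problem becomes better conditioned, which mirrors the abstract's point that a dense network like S-net is needed for optimal results.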

     
2.
Time-domain reflectometry (TDR) can monitor the moisture content (MC) of water-saturated logs stored in wet-decks, where the MC exceeds the range measurable with traditional moisture meters (>50%). For this application to become routine, TDR monitoring of wet-decks must occur after establishment, and tools are needed to automate data collection and analysis. We developed models that predict wood MC using three-rod, epoxy-encased TDR probes inserted into the transverse surface of bolts (probes in prior wet-deck studies were installed on the tangential surface). Models were developed for southern pine, sweetgum, yellow poplar, hickory, red oak, and white oak using a Campbell Scientific TDR100. For each species, at least 37 bolts were soaked for a minimum of three months and then air dried, with TDR waveforms and MC recorded periodically. Calibrations between MC and the TDR signal were developed using nonlinear mixed effects models. Fixed effects ranged from excellent (southern pine R2 = 0.93) to poor (red oak R2 = 0.36, hickory R2 = 0.38). Independent of wood species, random effects all had an R2 greater than 0.80, which indicates that TDR detects changes in MC at the individual-sample level. Use of TDR combined with a datalogger was demonstrated in an operational wet-deck, where changes in MC were monitored over 12 months, and in a laboratory trial where bolts were exposed to successive wet-dry cycles over 400 days. Both applications demonstrated the utility of TDR for monitoring changes in wood MC in high-MC environments where periodic measurement is not feasible due to operational safety concerns. Because a saturated TDR reading indicates a saturated MC, and because of the relatively accurate random effects found here, developing individual species models is not necessary for monitoring purposes. Therefore, TDR monitoring can be broadly applied to wet-decks, regardless of the species stored.
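A simplified, fixed-effects-only sketch of such a calibration fit is shown below. The functional form, the example data, and the omission of per-bolt random effects are assumptions for illustration, not the study's mixed-effects procedure.

```python
# Simplified fixed-effects-only calibration sketch: fit an assumed nonlinear
# form relating a TDR signal level to wood moisture content (MC, %).
# The power-law form and the paired observations below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def calibration(signal, a, b, c):
    """Assumed nonlinear calibration: MC = a + b * signal**c."""
    return a + b * np.power(signal, c)

# Hypothetical paired observations (TDR signal level, oven-dry-basis MC %).
signal = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 3.0, 3.6])
mc = np.array([35.0, 52.0, 74.0, 95.0, 118.0, 141.0, 160.0])

params, _ = curve_fit(calibration, signal, mc, p0=(10.0, 40.0, 1.0))
print("fitted (a, b, c):", params)
print("predicted MC at signal = 2.0:", calibration(2.0, *params))
```

In the study's mixed-effects setting, parameters such as `a` would additionally carry a per-bolt random deviation, which is what allows per-sample tracking even when the species-level (fixed-effects) fit is poor.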
Passive remote sensing services are indispensable in modern society because of their applications in climate studies and Earth science. Among these, NASA's Soil Moisture Active Passive (SMAP) mission provides an essential climate variable, the moisture content of the soil, by measuring microwave radiation within the protected 1400-1427 MHz band. However, with the growth of active wireless technologies such as the Internet of Things (IoT), unmanned aerial vehicles (UAV), and 5G wireless communication, SMAP's passive observations are expected to experience an increasing amount of Radio Frequency Interference (RFI). RFI is a well-documented issue, and SMAP has a ground processing unit dedicated to tackling it. However, advanced techniques are needed to address the growing RFI problem for passive sensing systems and to allow communication and sensing systems to coexist. In this paper, we apply a deep learning approach in which a novel Convolutional Neural Network (CNN) architecture is employed for both RFI detection and mitigation. SMAP Level 1A spectrograms of antenna counts and various moments data are used as the inputs to the deep learning architecture. We simulate different types of RFI sources, such as pulsed, CW, or wideband anthropogenic signals. We then use artificially corrupted SMAP Level 1B antenna measurements in conjunction with RFI labels to train the learning architecture. While the learned detection network classifies input spectrograms as RFI or no-RFI cases, the mitigation network reconstructs RFI-mitigated antenna temperature images. The proposed learning framework takes advantage of both the existing SMAP data and the simulated RFI scenarios. Future remote sensing systems such as radiometers will suffer from an increasing RFI problem, and spectrum sharing techniques that allow the coexistence of sensing and communication systems will be of utmost importance to both parties. RFI detection and mitigation will remain a prerequisite for these radiometers, and the proposed deep learning approach has the potential to provide an additional perspective to existing solutions. We present a detailed analysis of the selected deep learning architecture, the obtained RFI detection accuracy, and the RFI mitigation performance.
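A minimal Keras sketch of a spectrogram-based RFI detector of the kind described above is given below. The input tile size, network depth, and binary RFI/no-RFI head are assumptions, not the SMAP processing chain.

```python
# Illustrative binary RFI detector operating on spectrogram tiles.
# The 128x128 input size and layer widths are assumptions.
from tensorflow.keras import layers, models

detector = models.Sequential([
    layers.Input(shape=(128, 128, 1)),      # assumed spectrogram tile size
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # P(RFI present in the tile)
], name="rfi_detector")

detector.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=["accuracy"])
detector.summary()
```

The companion mitigation network described in the abstract would instead be an image-to-image model (e.g., encoder-decoder) trained to reconstruct clean antenna temperature images from corrupted ones.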
  4. SUMMARY

    Non-invasive subsurface imaging using full waveform inversion (FWI) has the potential to fundamentally change near-surface (<30 m) site characterization by enabling the recovery of high-resolution (metre-scale) 2-D/3-D maps of subsurface elastic material properties. Yet, FWI results are quite sensitive to their starting model due to their dependence on local-search optimization techniques and inversion non-uniqueness. Starting model dependence is particularly problematic for near-surface FWI due to the complexity of the recorded seismic wavefield (e.g. dominant surface waves intermixed with body waves) and the potential for significant spatial variability over short distances. In response, convolutional neural networks (CNNs) are investigated as a potential tool for developing starting models for near-surface 2-D elastic FWI. Specifically, 100 000 subsurface models were generated to be representative of a classic near-surface geophysics problem; namely, imaging a two-layer, undulating, soil-over-bedrock interface. A CNN has been developed from these synthetic models that is capable of transforming an experimental wavefield acquired using a seismic source located at the centre of a linear array of 24 closely spaced surface sensors directly into a robust starting model for FWI. The CNN approach was able to produce 2-D starting models with seismic image misfits that were significantly less than the misfits from other common starting model approaches, and in many cases even less than the misfits obtained by FWI with inferior starting models. The ability of the CNN to generalize outside its two-layered training set was assessed using a more complex, three-layered, soil-over-bedrock formation. While the predictive ability of the CNN was slightly reduced for this more complex case, it was still able to achieve seismic image and waveform misfits that were comparable to other commonly used starting models, despite not being trained on any three-layered models. As such, CNNs show great potential as tools for rapidly developing robust, site-specific starting models for near-surface elastic FWI.
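A minimal Keras sketch of the wavefield-to-starting-model mapping is given below: an encoder compresses the recorded shot gather (time samples by the 24 surface sensors) and a dense decoder emits a coarse 2-D velocity image. All sizes and the choice of shear-wave velocity as the output are assumptions for illustration.

```python
# Illustrative mapping from a recorded wavefield to a 2-D FWI starting model.
# N_T, NZ, NX, and the layer configuration are assumptions; only the 24
# receivers come from the abstract.
from tensorflow.keras import layers, models

N_T, N_RX = 1000, 24   # assumed time samples; the 24 surface sensors
NZ, NX = 32, 64        # assumed starting-model grid (depth x distance)

inp = layers.Input(shape=(N_T, N_RX, 1))
x = layers.Conv2D(16, (7, 3), strides=(4, 1), activation="relu")(inp)
x = layers.Conv2D(32, (7, 3), strides=(4, 1), activation="relu")(x)
x = layers.Conv2D(64, (7, 3), strides=(4, 2), activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(512, activation="relu")(x)
vs = layers.Dense(NZ * NX, activation="linear")(x)  # velocity per grid cell
vs = layers.Reshape((NZ, NX))(vs)

starting_model_cnn = models.Model(inp, vs, name="fwi_starting_model")
starting_model_cnn.compile(optimizer="adam", loss="mse")
```

Trained on the 100 000 synthetic two-layer models, such a network turns a single recorded wavefield into a starting model in one forward pass, which is what makes the approach attractive for rapid site characterization.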

     
Batch Normalization (BN) is essential to effectively train state-of-the-art deep Convolutional Neural Networks (CNNs). It normalizes the layer outputs during training using the statistics of each mini-batch. BN accelerates the training procedure by allowing the safe use of large learning rates, and it alleviates the need for careful initialization of the parameters. In this work, we study BN from the viewpoint of Fisher kernels that arise from generative probability models. We show that, assuming the samples within a mini-batch are drawn from the same probability density function, BN is identical to the Fisher vector of a Gaussian distribution. This means the batch normalizing transform can be explained in terms of kernels that naturally emerge from the probability density function modeling the generative process of the underlying data distribution. Consequently, it promises higher discrimination power for the batch-normalized mini-batch. However, given the rectifying non-linearities employed in CNN architectures, the distribution of the layer outputs is asymmetric. Therefore, for BN to fully benefit from the aforementioned properties, we propose approximating the underlying data distribution not with one Gaussian, but with a mixture of Gaussian densities. Deriving the Fisher vector for a Gaussian Mixture Model (GMM) reveals that batch normalization can be improved by independently normalizing with respect to the statistics of disentangled sub-populations. We refer to our proposed soft, piecewise version of batch normalization as Mixture Normalization (MN). Through an extensive set of experiments on CIFAR-10 and CIFAR-100, using both a 5-layer deep CNN and the modern Inception-V3 architecture, we show that mixture normalization reduces the number of gradient updates required to reach the maximum test accuracy of the batch-normalized model by ∼31%-47% across a variety of training scenarios. Replacing even a few BN modules with MN in the 48-layer-deep Inception-V3 architecture is sufficient to obtain not only considerable training acceleration but also better final test accuracy. We show that similar observations hold for 40- and 100-layer-deep DenseNet architectures as well. We complement our study by evaluating the application of mixture normalization to Generative Adversarial Networks (GANs), where "mode collapse" hinders the training process. We replace only a few batch normalization layers in the generator with our proposed mixture normalization. Our experiments using Deep Convolutional GAN (DCGAN) on CIFAR-10 show that the mixture-normalized DCGAN not only provides an acceleration of ∼58% but also reaches a lower (better) "Fréchet Inception Distance" (FID) of 33.35, compared to 37.56 for its batch-normalized counterpart.
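A toy offline sketch of the statistic-disentangling step is shown below: fit a small GMM to a mini-batch of activations, then normalize each sample against the statistics of its soft-assigned component rather than the global batch statistics. This uses scikit-learn's GMM for clarity; an in-graph, training-time implementation of MN would differ.

```python
# Toy sketch of mixture normalization on one channel's mini-batch.
# The two-component batch below is synthetic and illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Post-ReLU activations are asymmetric: a mass near zero plus a heavier mode.
batch = np.concatenate([rng.normal(0.2, 0.05, 96), rng.normal(2.0, 0.5, 32)])
x = batch.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
resp = gmm.predict_proba(x)                # soft assignments, shape (N, K)
means = gmm.means_.ravel()                 # per-component mean
stds = np.sqrt(gmm.covariances_.ravel())   # per-component std

# Normalize against each component's statistics, blend by responsibility.
normalized = (resp * (x - means) / stds).sum(axis=1)
print("mixture-normalized mean/std:", normalized.mean(), normalized.std())
```

Standard BN would instead subtract the single batch mean and divide by the single batch standard deviation, which blurs together the two sub-populations that the GMM separates here.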