

Title: Underwater Motion and Activity Recognition using Acoustic Wireless Networks
Underwater motion recognition using acoustic wireless networks has promising potential for diver activity monitoring and aquatic animal recognition without the burden of the expensive underwater cameras used by image-based underwater classification techniques. However, accurately extracting features that are independent of complicated underwater environments, such as inhomogeneous deep seawater, is a serious challenge for underwater motion recognition. Velocities of the target body (VTB) during motion are excellent environment-independent features for WiFi-based recognition techniques in indoor environments; however, VTB features are hard to extract accurately underwater. The inaccurate VTB estimation is caused by the fact that the acoustic signal propagates along a curve rather than the straight line it follows in air. In this paper, we propose an underwater motion recognition mechanism for inhomogeneous deep seawater using acoustic wireless networks. To accurately extract VTB features, we first derive Doppler frequency shift (DFS) coefficients that can be used for VTB estimation when signals propagate along curved paths. Second, we propose a dynamic self-refining (DSR) optimization algorithm over acoustic wireless networks consisting of multiple transmitter-receiver links to estimate the VTB. These VTB features are then used to train a convolutional neural network (CNN). Through simulation, the estimated VTB features are evaluated, and the testing results validate that our proposed underwater motion recognition mechanism achieves high classification accuracy.
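For intuition, the straight-line Doppler relation that the paper's corrected DFS coefficients replace can be sketched in a few lines of Python. This is only an illustrative approximation under a nominal seawater sound speed; the function name, carrier frequency, and example numbers are assumptions for illustration, not values from the paper.

```python
# Illustrative straight-line Doppler model (the approximation the paper improves on,
# not the paper's derivation): maps a measured Doppler frequency shift (DFS) to the
# rate of change of the acoustic path length.
C_SEAWATER = 1500.0  # nominal sound speed in seawater, m/s (varies with depth, salinity, temperature)

def path_rate_from_dfs(dfs_hz: float, carrier_hz: float, c: float = C_SEAWATER) -> float:
    """Path-length change rate (m/s) from a DFS, assuming straight-line propagation:
    dfs = (dL/dt) * carrier / c  =>  dL/dt = dfs * c / carrier.
    In inhomogeneous deep seawater the ray bends, which is why the paper derives
    corrected DFS coefficients rather than using this relation directly."""
    return dfs_hz * c / carrier_hz

# Example: a 20 kHz acoustic carrier and a measured +8 Hz shift
print(path_rate_from_dfs(8.0, 20e3))  # 0.6 m/s path-length change rate
```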
Award ID(s):
1652502
NSF-PAR ID:
10230960
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
IEEE ICC 2020
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Driven by the development of machine learning and wireless techniques, many research efforts have been devoted to human activity recognition (HAR). Although various deep learning algorithms can achieve high accuracy in recognizing human activities, existing works lack a theoretical performance upper bound: the best accuracy that is limited only by the influencing factors in wireless networks, such as indoor physical environments and the settings of wireless sensing devices, regardless of the HAR algorithm. Without an understanding of this performance upper bound, mistakenly configuring the influencing factors can reduce HAR accuracy drastically no matter which deep learning algorithm is used. In this paper, we propose the HAR performance upper bound, which is the minimum classification error probability that does not depend on any HAR algorithm and can be considered a function of the influencing factors in wireless sensing networks for CSI-based human activity recognition. Since the performance upper bound captures the impacts of the influencing factors on HAR accuracy, we further analyze their influence under varying situations, such as through-the-wall HAR and different human activities, via MATLAB simulations.
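The algorithm-independent floor described here corresponds to the Bayes (minimum) error probability, P_e = 1 - E[max_k P(k | x)]. The toy Monte Carlo sketch below is my illustration, not the paper's CSI model: it estimates that floor for a one-dimensional feature with two Gaussian activity classes, where better "influencing factors" correspond to larger class separation and a lower floor.

```python
import numpy as np

def bayes_error_two_gaussians(mu0, mu1, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of the Bayes error for two equal-prior Gaussian classes."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=n)                      # true activity class
    x = rng.normal(np.where(labels == 0, mu0, mu1), sigma)   # observed toy feature
    p0 = np.exp(-(x - mu0) ** 2 / (2 * sigma ** 2))          # class-conditional likelihoods
    p1 = np.exp(-(x - mu1) ** 2 / (2 * sigma ** 2))
    bayes_decision = p1 > p0                                  # optimal rule under equal priors
    return np.mean(bayes_decision != labels)                  # error of the best possible classifier

print(bayes_error_two_gaussians(0.0, 1.0, 1.0))   # ~0.31 (heavy class overlap)
print(bayes_error_two_gaussians(0.0, 3.0, 1.0))   # ~0.07 (well-separated classes)
```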
  2. With increasing needs for fast and reliable communication between devices, wireless communication techniques are rapidly evolving to meet such needs. Multiple input and output (MIMO) systems are one of the key techniques that utilize multiple antennas for high-throughput and reliable communication. However, increasing the number of antennas also adds to the complexity of channel estimation, which is essential to accurately decode the transmitted data. Therefore, development of accurate and efficient channel estimation methods is necessary. We report the performance of machine learning-based channel estimation approaches to enhance channel estimation performance in high-noise environments. More specifically, bit error rate (BER) performance of 2 × 2 and 4 × 4 MIMO communication systems with space-time block coding (STBC) and two neural network-based channel estimation algorithms is analyzed. Most significantly, the results demonstrate that a generalized regression neural network (GRNN) model matches the BER results of known-channel communication for 4 × 4 MIMO with 8-bit pilots, when trained in a specific signal-to-noise ratio (SNR) regime. Moreover, up to 9 dB improvement in SNR for a target BER is observed, compared to least square (LS) channel estimation, especially when the model is trained in the low SNR regime. A deep artificial neural network (Deep ANN) model shows worse BER performance compared to LS in all tested environments. These preliminary results present an opportunity for achieving better performance in channel estimation through GRNN and highlight further research topics for deployment in the wild.
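For context, the least-squares baseline mentioned above can be written as H_LS = Y X^H (X X^H)^{-1} for received pilots Y = HX + N. The NumPy sketch below illustrates only that LS baseline; the antenna counts, pilot length, and SNR are assumptions, and it is not the paper's GRNN or Deep ANN setup.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr, n_pilots = 4, 4, 8            # transmit antennas, receive antennas, pilot length

# Random Rayleigh-like channel and BPSK-like complex pilot symbols
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
X = (rng.choice([-1.0, 1.0], size=(nt, n_pilots))
     + 1j * rng.choice([-1.0, 1.0], size=(nt, n_pilots))) / np.sqrt(2)

snr_db = 10.0
noise_std = 10 ** (-snr_db / 20)
N = noise_std * (rng.normal(size=(nr, n_pilots))
                 + 1j * rng.normal(size=(nr, n_pilots))) / np.sqrt(2)

Y = H @ X + N                                             # received pilot observations
H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)     # LS channel estimate

print(np.linalg.norm(H - H_ls) / np.linalg.norm(H))       # relative estimation error
```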
  3. In this paper, we present MuteIt, an ear-worn system for recognizing unvoiced human commands. MuteIt presents an intuitive alternative to voice-based interactions, which can be unreliable in noisy environments, disruptive to those around us, and a risk to our privacy. We propose a twin-IMU setup to track the user's jaw motion and cancel motion artifacts caused by head and body movements. MuteIt processes jaw motion during word articulation to break each word signal into its constituent syllables, and further each syllable into phonemes (vowels, visemes, and plosives). Recognizing unvoiced commands by tracking jaw motion alone is challenging: as a secondary articulator, the jaw's motion is not distinctive enough for unvoiced speech recognition. MuteIt combines IMU data with the anatomy of jaw movement as well as principles from linguistics to model the task of word recognition as an estimation problem. Rather than employing machine learning to train a word classifier, we reconstruct each word as a sequence of phonemes using a bi-directional particle filter, enabling the system to be easily scaled to a large set of words. We validate MuteIt with 20 subjects with diverse speech accents recognizing 100 common command words. MuteIt achieves a mean word recognition accuracy of 94.8% in noise-free conditions. When compared with common voice assistants, MuteIt outperforms them in noisy acoustic environments, achieving higher than 90% recognition accuracy. Even in the presence of motion artifacts, such as head movement, walking, and riding in a moving vehicle, MuteIt achieves a mean word recognition accuracy of 91% over all scenarios.
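As a much-simplified illustration of the final recognition step (MuteIt itself reconstructs words with a bi-directional particle filter rather than the nearest-match rule shown here), the sketch below matches an estimated phoneme sequence against a hypothetical command vocabulary; the words and phoneme codes are invented for the example.

```python
from difflib import SequenceMatcher

# Hypothetical command vocabulary mapped to phoneme sequences (ARPAbet-style codes)
VOCAB = {
    "play":  ["p", "l", "ey"],
    "pause": ["p", "ao", "z"],
    "next":  ["n", "eh", "k", "s", "t"],
}

def recognize(estimated_phonemes):
    """Return the command whose phoneme sequence best matches the estimate."""
    score = lambda w: SequenceMatcher(None, VOCAB[w], estimated_phonemes).ratio()
    return max(VOCAB, key=score)

print(recognize(["p", "ao", "s"]))  # -> "pause", despite one mis-decoded phoneme
```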
  4. Recently, significant efforts have been made to explore device-free human activity recognition techniques that utilize the information collected by existing indoor wireless infrastructures without requiring the monitored subject to carry a dedicated device. Most of the existing work, however, focuses on the analysis of the signal received by a single device. In practice, there are usually multiple devices "observing" the same subject. Each of these devices can be regarded as an information source and provides a unique "view" of the observed subject. Intuitively, if we can combine the complementary information carried by the multiple views, we will be able to improve the activity recognition accuracy. Towards this end, we propose DeepMV, a unified multi-view deep learning framework, to learn informative representations of heterogeneous device-free data. DeepMV can combine different views' information weighted by the quality of their data and extract commonness shared across different environments to improve the recognition performance. To evaluate the proposed DeepMV model, we set up a testbed using commercialized WiFi and acoustic devices. Experiment results show that DeepMV can effectively recognize activities and outperforms state-of-the-art human activity recognition methods.
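The core idea of weighting each view by the quality of its data before fusing can be sketched with a small attention-style module. The PyTorch snippet below is an assumption-laden toy (module name, dimensions, and the simple softmax scorer are mine), not DeepMV's published architecture.

```python
import torch
import torch.nn as nn

class WeightedViewFusion(nn.Module):
    """Fuse per-view embeddings with learned quality weights, then classify the activity."""
    def __init__(self, feat_dim: int, n_classes: int):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)          # per-view quality score
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, feat_dim), one embedding per device/"view"
        weights = torch.softmax(self.scorer(views).squeeze(-1), dim=1)   # (batch, n_views)
        fused = (weights.unsqueeze(-1) * views).sum(dim=1)               # quality-weighted sum
        return self.classifier(fused)

model = WeightedViewFusion(feat_dim=64, n_classes=6)
logits = model(torch.randn(8, 3, 64))    # 8 samples, 3 views (e.g., WiFi + acoustic devices)
print(logits.shape)                      # torch.Size([8, 6])
```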
  5. Wide-area soil moisture sensing is a key element of smart irrigation systems. However, existing soil moisture sensing methods usually fail to achieve both satisfactory mobility and high moisture estimation accuracy. In this paper, we present the design and implementation of a novel soil moisture sensing system, named SoilId, that combines a UAV and a COTS IR-UWB radar for wide-area soil moisture sensing without the need to bury any battery-powered in-ground device. Specifically, we design a series of novel methods to help SoilId extract soil moisture related features from the received radar signals, and automatically detect and discard data contaminated by the UAV's uncontrollable motion and multipath interference. Furthermore, we leverage the powerful representation ability of deep neural networks and carefully design a neural network model to accurately map the extracted radar signal features to soil moisture estimates. We have extensively evaluated SoilId against a variety of real-world factors, including the UAV's uncontrollable motion, multipath interference, soil surface coverages, and many others. The experimental results from our UAV-based system validate that SoilId can push the accuracy limits of RF-based soil moisture sensing techniques to a 50% quantile MAE of 0.23%.
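The reported metric, a 50% quantile MAE, is simply the median of the absolute estimation errors. The short snippet below shows the computation on made-up moisture values (in % volumetric water content); the numbers are illustrative only.

```python
import numpy as np

true_moisture = np.array([12.1, 18.4, 25.0, 30.2, 9.7])   # ground-truth soil moisture, %
predicted     = np.array([12.4, 18.1, 25.5, 29.8, 10.0])  # model estimates, %

abs_err = np.abs(predicted - true_moisture)
print(np.quantile(abs_err, 0.5))   # 50% quantile (median) absolute error, in percentage points
```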

     