Title: Deep AI Enabled Ubiquitous Wireless Sensing: A Survey
With the development of the Internet of Things (IoT), many kinds of wireless signals (e.g., Wi-Fi, LoRa, RFID) now fill our living and working spaces. Beyond communication, wireless signals can sense the status of surrounding objects, known as wireless sensing, through their reflection, scattering, and refraction while propagating in space. In the last decade, many sophisticated wireless sensing techniques and systems have been studied for various applications (e.g., gesture recognition, localization, and object imaging). Recently, deep Artificial Intelligence (AI), also known as Deep Learning (DL), has shown great success in computer vision, and some works have initially shown that deep AI can benefit wireless sensing as well, leading to a brand-new step toward ubiquitous sensing. In this survey, we focus on the evolution of wireless sensing enhanced by deep AI techniques. We first present a general workflow of Wireless Sensing Systems (WSSs), which consists of signal pre-processing, high-level feature extraction, and sensing model formulation. For each module, existing deep AI-based techniques are summarized and compared with traditional approaches. Then, we provide a view of the issues and challenges induced by combining deep AI and wireless sensing. Finally, we discuss future trends in deep AI for enabling ubiquitous wireless sensing.
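As a rough illustration of the three-module WSS workflow named in the abstract, the following minimal Python sketch chains signal pre-processing, high-level feature extraction, and sensing model formulation on synthetic Channel State Information (CSI). The filter, features, and threshold-based model are illustrative assumptions for exposition, not techniques prescribed by the survey.

```python
# Minimal sketch of the three-stage WSS workflow from the abstract.
# All processing choices here are illustrative assumptions.
import numpy as np

def preprocess(csi: np.ndarray) -> np.ndarray:
    """Signal pre-processing: smooth each subcarrier's time series
    with a 5-tap moving average to suppress measurement noise."""
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, csi)

def extract_features(csi: np.ndarray) -> np.ndarray:
    """High-level feature extraction: mean spectral magnitude per
    subcarrier, obtained via an FFT along the time axis."""
    return np.abs(np.fft.rfft(csi, axis=0)).mean(axis=0)

def sensing_model(features: np.ndarray) -> str:
    """Sensing model formulation: a placeholder threshold decision.
    A deep model (e.g., a CNN) would replace this stage in an
    AI-enabled WSS."""
    return "motion" if features.sum() > 10.0 else "static"

# Usage: 1000 time samples x 30 subcarriers of synthetic CSI amplitude.
raw_csi = np.abs(np.random.randn(1000, 30))
print(sensing_model(extract_features(preprocess(raw_csi))))
```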
Award ID(s):
1909177
NSF-PAR ID:
10293162
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Computing Surveys
Volume:
54
Issue:
2
ISSN:
0360-0300
Page Range / eLocation ID:
1 to 35
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Stress plays a critical role in our lives, impacting our productivity and our long-term physiological and psychological well-being. This has motivated the development of stress monitoring solutions to better understand stress, its impact on productivity and teamwork, and to help users adapt their habits toward more sustainable stress levels. However, today's stress monitoring solutions remain obtrusive, requiring active user participation (e.g., self-reporting), interfering with people's daily activities, and often adding more burden to users looking to reduce their stress. In this paper, we introduce WiStress, the first system that can passively monitor a user's stress levels by relying on wireless signals. WiStress does not require users to actively provide input or to wear any devices on their bodies. It operates by transmitting ultra-low-power wireless signals and measuring their reflections off the user's body. WiStress introduces two key innovations. First, it presents the first machine learning network that can accurately and robustly extract inter-beat intervals (IBIs) from wireless reflections without constraints on a user's daily activities. Second, it introduces a stress classification framework that combines the extracted heartbeats with other wirelessly captured stress-related features in order to infer a subject's stress level. We built a prototype of WiStress and tested it on 22 different subjects across different environments in both stress-induced and free-living conditions. Our results demonstrate that WiStress has high accuracy (84%-95%) in inferring a person's stress level in a fully automated way, paving the way for ubiquitous sensing systems that can monitor stress and provide feedback to improve productivity, health, and well-being.
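As a hedged illustration of the pipeline described above: WiStress extracts IBIs with a learned network, but a classical peak-picking stand-in conveys the same data flow. The sampling rate, detection thresholds, and the RMSSD heart-rate-variability feature below are illustrative assumptions, not the paper's method.

```python
# Sketch of an IBI-to-stress-feature pipeline. WiStress extracts IBIs
# with a machine learning network; simple peak picking stands in here.
import numpy as np
from scipy.signal import find_peaks

FS = 100.0  # assumed sampling rate of the reflection signal (Hz)

def extract_ibis(reflection: np.ndarray) -> np.ndarray:
    """Detect heartbeat peaks (at least 0.4 s apart, above an assumed
    amplitude threshold) and return inter-beat intervals in seconds."""
    peaks, _ = find_peaks(reflection, height=0.5, distance=int(0.4 * FS))
    return np.diff(peaks) / FS

def rmssd(ibis: np.ndarray) -> float:
    """RMSSD, a standard heart-rate-variability measure often used as
    a stress-related feature (lower RMSSD ~ higher stress)."""
    return float(np.sqrt(np.mean(np.diff(ibis) ** 2)))

# Usage: a synthetic 30 s trace with one sharp pulse per second.
t = np.arange(0, 30, 1 / FS)
pulse_train = np.exp(-((t % 1.0) - 0.5) ** 2 / 0.001)
trace = pulse_train + 0.05 * np.random.randn(t.size)
ibis = extract_ibis(trace)
print(f"{len(ibis)} IBIs, RMSSD = {rmssd(ibis):.3f} s")
```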
  2. Passive remote sensing services are indispensable in modern society because of their applications in climate studies and earth science. Among those, NASA's Soil Moisture Active Passive (SMAP) mission provides an essential climate variable, the moisture content of the soil, by using microwave radiation within the protected band of 1400-1427 MHz. However, because of increasing active wireless technologies such as the Internet of Things (IoT), unmanned aerial vehicles (UAVs), and 5G wireless communication, SMAP's passive observations are expected to experience an increasing amount of Radio Frequency Interference (RFI). RFI is a well-documented issue, and SMAP has a ground processing unit dedicated to tackling it. However, advanced techniques are needed to address the growing RFI problem for passive sensing systems and to enable communication and sensing systems to coexist. In this paper, we apply a deep learning approach in which a novel Convolutional Neural Network (CNN) architecture is employed for both RFI detection and mitigation. SMAP Level 1A spectrograms of antenna counts and various moments data are used as inputs to the deep learning architecture. We simulate different types of RFI sources such as pulsed, CW, or wideband anthropogenic signals. We then use artificially corrupted SMAP Level 1B antenna measurements in conjunction with RFI labels to train the learning architecture. While the learned detection network classifies input spectrograms as RFI or no-RFI cases, the mitigation network reconstructs RFI-mitigated antenna temperature images. The proposed learning framework takes advantage of both the existing SMAP data and the simulated RFI scenarios. Future remote sensing systems such as radiometers will suffer from an increasing RFI problem, and spectrum sharing techniques that allow coexistence of sensing and communication systems will be of utmost importance to both parties. RFI detection and mitigation will remain a prerequisite for these radiometers, and the proposed deep learning approach has the potential to provide an additional perspective to existing solutions. We present a detailed analysis of the selected deep learning architecture, the obtained RFI detection accuracy levels, and the RFI mitigation performance.
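For concreteness, here is a minimal PyTorch sketch of a CNN spectrogram classifier in the spirit of the detection network described above. The layer sizes, the 64x64 single-channel input, and the training step are assumptions for illustration; they are not the authors' exact architecture.

```python
# Minimal CNN that classifies radiometer spectrograms as RFI / no-RFI.
# Architecture and input size are illustrative assumptions.
import torch
import torch.nn as nn

class RFIDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> two 2x pools -> 32 channels of 16x16 maps.
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # RFI / no-RFI

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: one training step on a synthetic batch of 8 spectrograms.
model = RFIDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 64, 64)   # batch of spectrogram patches
y = torch.randint(0, 2, (8,))   # synthetic RFI labels
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
print(f"loss = {loss.item():.3f}")
```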
  3. Obeid, I. (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open-source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high-quality annotations of breast tissue. It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4], and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not have access to such data resources must rely on techniques in which existing models can be adapted to new datasets [6]. A preliminary version of this breast corpus release was tested in a pilot study using a baseline machine learning system, ResNet18 [7], that leverages several open-source Python tools. The pilot corpus was divided into three sets: train, development, and evaluation. Portions of these slides were manually annotated [1] using the nine labels in Table 1 [8] to identify five to ten examples of pathological features on each slide. Not every pathological feature is annotated, meaning excluded areas can include foci particular to these labels that are not used for training. A summary of the number of patches within each label is given in Table 2. To maintain a balanced training set, 1,000 patches of each label were used to train the machine learning model. Throughout all sets, only annotated patches were involved in model development. The performance of this model in identifying all the patches in the evaluation set can be seen in the confusion matrix of classification accuracy in Table 3. The highest-performing labels were background, with 97% correct identification, and artifact, with 76% correct identification. A correlation exists between labels with more than 6,000 development patches and accurate performance on the evaluation set. Additionally, these results indicated a need to further refine the annotation of invasive ductal carcinoma (“indc”), inflammation (“infl”), nonneoplastic features (“nneo”), normal (“norm”), and suspicious (“susp”). This pilot experiment motivated changes to the corpus that will be discussed in detail in this poster presentation. To increase the accuracy of the machine learning model, we modified how we addressed underperforming labels. One common source of error arose from how non-background labels were converted into patches. Large areas of background within other labels were isolated within a patch, resulting in connective tissue misrepresenting a non-background label. In response, the annotation overlay margins were revised to exclude benign connective tissue in non-background labels.
Corresponding patient reports and supporting immunohistochemical stains further guided annotation reviews. The microscopic diagnoses given by the primary pathologist in these reports detail the pathological findings within each tissue site, but not within each specific slide. The microscopic diagnoses informed revisions specifically targeting annotated regions classified as cancerous, ensuring that the labels “indc” and “dcis” were used only in situations where a micropathologist diagnosed them as such. Further differentiation of cancerous and precancerous labels, as well as the location of their focus on a slide, could be accomplished with supplemental immunohistochemically stained (IHC) slides. When distinguishing whether a focus is a nonneoplastic feature versus a cancerous growth, pathologists apply antigen-targeting stains to the tissue in question to confirm the diagnosis. For example, a nonneoplastic feature of usual ductal hyperplasia will display diffuse staining for cytokeratin 5 (CK5) and no diffuse staining for estrogen receptor (ER), while a cancerous growth of ductal carcinoma in situ will have negative or focally positive staining for CK5 and diffuse staining for ER [9]. Many tissue samples contain cancerous and non-cancerous features with morphological overlaps that cause variability between annotators. The informative fields that IHC slides provide could play an integral role in machine-model pathology diagnostics. Following the revisions made to all the annotations, a second experiment was run using ResNet18. Compared to the pilot study, an increase in model prediction accuracy was seen for the labels indc, infl, nneo, norm, and null. This increase is correlated with an increase in annotated area and annotation accuracy. Model performance in identifying the suspicious label decreased by 25% due to the 57% decrease in the total annotated area described by this label. A summary of the model performance is given in Table 4, which shows the new prediction accuracy and the absolute change in error rate compared to Table 3. The breast tissue subset we are developing includes 3,505 annotated breast pathology slides from 296 patients. The average size of a scanned SVS file is 363 MB. The annotations are stored in an XML format. A CSV version of the annotation file is also available, providing a flat, or simple, annotation that is easy for machine learning researchers to access and interface with their systems. Each patient is identified by an anonymized medical reference number. Within each patient’s directory, one or more sessions are identified, also anonymized to the first of the month in which the sample was taken. These sessions are broken into groupings of tissue taken on that date (in this case, breast tissue). A deidentified patient report stored as a flat text file is also available. Within these slides there are a total of 16,971 annotated regions, with an average of 4.84 annotations per slide. Among those annotations, 8,035 are non-cancerous (normal, background, null, and artifact), 6,222 are carcinogenic signs (inflammation, nonneoplastic, and suspicious), and 2,714 are cancerous labels (ductal carcinoma in situ and invasive ductal carcinoma). The individual patients are split into three sets: train, development, and evaluation. Of the 74 cancerous patients, 20 each were allotted to the development and evaluation sets, while the remaining 34 were allotted to train.
The remaining 222 patients were split to preserve the overall distribution of labels within the corpus. This was done in the hope of creating control sets for comparable studies. Overall, the development and evaluation sets each have 80 patients, while the training set has 136 patients. In a related component of this project, slides from the Fox Chase Cancer Center (FCCC) Biosample Repository (https://www.foxchase.org/research/facilities/genetic-research-facilities/biosample-repository-facility) are being digitized in addition to slides provided by Temple University Hospital. This data includes 18 different types of tissue, including approximately 38.5% urinary tissue and 16.5% gynecological tissue. These slides and the metadata provided with them are already anonymized and include diagnoses in a spreadsheet with sample and patient ID. We plan to release over 13,000 unannotated slides from the FCCC Corpus simultaneously with v1.0.0 of TUDP. Details of this release will also be discussed in this poster. Few digitally annotated databases of pathology samples like TUDP exist due to the extensive data collection and processing required. The breast corpus subset should be released by November 2021. By December 2021 we should also release the unannotated FCCC data. We are currently annotating urinary tract data as well. We expect to release about 5,600 processed TUH slides in this subset. We have an additional 53,000 unprocessed TUH slides digitized. Corpora of this size will stimulate the development of a new generation of deep learning technology. In clinical settings where resources are limited, an assistive diagnosis model could support pathologists’ workload and even help prioritize suspected cancerous cases.
ACKNOWLEDGMENTS This material is supported by the National Science Foundation under grant nos. CNS-1726188 and 1925494. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
REFERENCES
[1] N. Shawki et al., “The Temple University Digital Pathology Corpus,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York City, New York, USA: Springer, 2020, pp. 67–104. https://www.springer.com/gp/book/9783030368432.
[2] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning.” Major Research Instrumentation (MRI), Division of Computer and Network Systems, Award No. 1726188, January 1, 2018 – December 31, 2021. https://www.isip.piconepress.com/projects/nsf_dpath/.
[3] A. Gulati et al., “Conformer: Convolution-augmented Transformer for Speech Recognition,” in Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2020, pp. 5036–5040. https://doi.org/10.21437/interspeech.2020-3015.
[4] C.-J. Wu et al., “Machine Learning at Facebook: Understanding Inference at the Edge,” in Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA), 2019, pp. 331–344. https://ieeexplore.ieee.org/document/8675201.
[5] I. Caswell and B. Liang, “Recent Advances in Google Translate,” Google AI Blog, 2020. [Online]. Available: https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html. [Accessed: 01-Aug-2021].
[6] V. Khalkhali, N. Shawki, V. Shah, M. Golmohammadi, I. Obeid, and J. Picone, “Low Latency Real-Time Seizure Detection Using Transfer Deep Learning,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2021, pp. 1–7. https://www.isip.piconepress.com/publications/conference_proceedings/2021/ieee_spmb/eeg_transfer_learning/.
[7] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning,” Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/nsf/mri_dpath/.
[8] I. Hunt, S. Husain, J. Simons, I. Obeid, and J. Picone, “Recent Advances in the Temple University Digital Pathology Corpus,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2019, pp. 1–4. https://ieeexplore.ieee.org/document/9037859.
[9] A. P. Martinez, C. Cohen, K. Z. Hanley, and X. (Bill) Li, “Estrogen Receptor and Cytokeratin 5 Are Reliable Markers to Separate Usual Ductal Hyperplasia From Atypical Ductal Hyperplasia and Low-Grade Ductal Carcinoma In Situ,” Arch. Pathol. Lab. Med., vol. 140, no. 7, pp. 686–689, Apr. 2016. https://doi.org/10.5858/arpa.2015-0238-OA.
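To make the baseline concrete, here is a hypothetical PyTorch sketch of a ResNet18 patch classifier over the nine annotation labels, with a synthetic balanced batch echoing the 1,000-patches-per-label training setup. The label order, input size, and hyperparameters are placeholders, not the TUDP pipeline.

```python
# Sketch of a ResNet18 baseline for nine-label pathology patch
# classification. Labels, sizes, and settings are assumed placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 9  # e.g., background, artifact, indc, dcis, infl, nneo, norm, susp, null

model = models.resnet18(weights=None)           # train from scratch
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)

opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# Usage: one step on a synthetic balanced batch of 224x224 RGB patches,
# two patches per label, mirroring the balanced training set idea.
patches = torch.randn(18, 3, 224, 224)
labels = torch.arange(NUM_LABELS).repeat(2)
opt.zero_grad()
loss = nn.functional.cross_entropy(model(patches), labels)
loss.backward()
opt.step()
print(f"loss = {loss.item():.3f}")
```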
  4. Realizing the vision of ubiquitous battery-free sensing has proven challenging, mainly due to the practical energy and range limitations of current wireless communication systems. To address this, we design the first wide-area, scalable backscatter network with multiple receiver (RX) and transmitter (TX) base units that communicate with battery-free sensor nodes. Our system circumvents the inherent limitations of backscatter systems (including the limited coverage area, frequency-dependent operability, and the sensor nodes' limited ability to handle network tasks) by introducing several coordination techniques between the base units, scaling from a single RX-TX pair to networks with many RX and TX units. We build low-cost RX and TX base units and battery-free sensor nodes with multiple sensing modalities and evaluate the performance of the MultiScatter system in various deployments. Our evaluation shows that we can successfully communicate with battery-free sensor nodes across 23,400 square feet of a two-floor educational complex using 5 RX and 20 TX units costing $569 in total. We also show that the aggregated throughput of the backscatter network increases linearly as the number of RX units and the network coverage grow.
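The linear-throughput claim can be read as a simple additive model: each added RX unit decodes backscatter uplinks in parallel, so the aggregate rate grows with the RX count. The sketch below encodes that reading with an assumed, purely illustrative per-receiver rate; it is not a measured value from the paper.

```python
# Back-of-the-envelope model of the reported linear scaling:
# aggregate throughput ~ per-RX decode rate x number of RX units.
PER_RX_KBPS = 50.0  # assumed decode rate of one receiver (placeholder)

def aggregate_throughput_kbps(num_rx: int) -> float:
    """Linear model: receivers decode in parallel, so rates add."""
    return PER_RX_KBPS * num_rx

# Usage: scaling from 1 RX up to the 5-RX deployment described above.
for n_rx in (1, 2, 5):
    print(f"{n_rx} RX -> ~{aggregate_throughput_kbps(n_rx):.0f} kbps aggregate")
```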
  5. Intelligent systems commonly employ vision sensors such as cameras to analyze a scene. Recent work has proposed a wireless sensing technique, wireless vibrometry, to enrich the scene analysis generated by vision sensors. Wireless vibrometry employs wireless signals to sense subtle vibrations from objects and infer their internal states. However, it is difficult for pure Radio-Frequency (RF) sensing systems to obtain objects' visual appearances (e.g., object types and locations), especially when an object is inactive. Thus, most existing wireless vibrometry systems assume that the number and types of objects in the scene are known. The key to removing these assumptions is to build a connection between wireless sensor time series and vision sensor images. We present Capricorn, a vision-guided wireless vibrometry system. In Capricorn, the object type information from vision sensors guides the wireless vibrometry system to select the most appropriate signal processing pipeline. The object tracking capability of computer vision also helps wireless systems efficiently detect and separate vibrations from multiple objects in real time.
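A minimal sketch of the vision-guided selection idea, assuming hypothetical object classes and pipelines: the class label emitted by a vision detector picks which vibration-processing routine the RF system runs. None of the names below come from the Capricorn paper.

```python
# Sketch of vision-guided pipeline dispatch: a vision label routes the
# RF vibration time series to a matching (placeholder) pipeline.
import numpy as np

def engine_pipeline(vib: np.ndarray) -> str:
    """Engines: report the dominant low-frequency spectral bin."""
    dominant_bin = int(np.argmax(np.abs(np.fft.rfft(vib))))
    return f"engine dominant bin: {dominant_bin}"

def speaker_pipeline(vib: np.ndarray) -> str:
    """Speakers: wideband audio content; report total signal energy."""
    return f"speaker energy: {np.sum(vib ** 2):.2f}"

PIPELINES = {"engine": engine_pipeline, "speaker": speaker_pipeline}

def analyze(detected_type: str, vibration: np.ndarray) -> str:
    # The vision detector's class label selects the signal
    # processing pipeline applied to the RF measurement.
    return PIPELINES[detected_type](vibration)

# Usage: route a synthetic 30 Hz vibration trace (1 kHz sampling)
# based on a hypothetical vision label.
trace = np.sin(2 * np.pi * 30 * np.arange(0, 1, 1e-3))
print(analyze("engine", trace))
```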