

Title: Extending Battery Life for Wi-Fi-Based IoT Devices: Modeling, Strategies, and Algorithm
Wi-Fi is one of the key wireless technologies for the Internet of Things (IoT) owing to its ubiquity. Low-power operation of commercial Wi-Fi-enabled IoT modules (typically powered by replaceable batteries) is critical to achieving long battery life while maintaining connectivity, and thereby to reducing the cost and frequency of maintenance. In this work, we focus on the commonly used sparse periodic uplink traffic scenario in IoT. Through extensive experiments with a state-of-the-art Wi-Fi-enabled IoT module (Texas Instruments SimpleLink CC3235SF), we study the performance of the power save mechanism (PSM) in the IEEE 802.11 standard and show that, even when utilizing IEEE 802.11 PSM, the battery life of the module while running thin uplink traffic is limited to roughly 30% of its battery life on an idle connection. Focusing on sparse uplink traffic, a prominent traffic scenario for IoT (e.g., periodic measurements, keep-alive mechanisms, etc.), we design a simulation framework for single-user sparse uplink traffic on ns-3, develop a detailed, accurate, and platform-agnostic power consumption model within the framework, and calibrate it to the CC3235SF. Subsequently, we present five potential power optimization strategies (including standard IEEE 802.11 PSM) and analyze, with simulation results, the sensitivity of power consumption to specific network characteristics (e.g., round-trip time (RTT) and the relative timing between TCP segment transmissions and beacon receptions) to present key insights. Finally, we propose a standards-compliant, client-side, cross-layer power saving optimization algorithm that can be implemented on client IoT modules. We show that the proposed optimization algorithm extends battery life by 24%, 26%, and 31% on average for sparse TCP uplink traffic with 5 TCP segments per second for networks with constant RTT values of 25 ms, 10 ms, and 5 ms, respectively.
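The battery-life figures above follow from a simple duty-cycle model of average current draw under PSM: the module wakes briefly per traffic period and sleeps the rest. The sketch below is a minimal illustration; the current levels, wake time, and battery capacity are hypothetical placeholders, not measured CC3235SF values, and the helper names are ours.

```python
def average_current_ma(i_active_ma, i_sleep_ma, t_active_s, period_s):
    """Average current over one traffic period: the module is awake for
    t_active_s (e.g., to send a TCP segment) and asleep for the remainder."""
    t_sleep_s = period_s - t_active_s
    return (i_active_ma * t_active_s + i_sleep_ma * t_sleep_s) / period_s

def battery_life_hours(capacity_mah, i_avg_ma):
    """Idealized battery life from capacity and average current draw."""
    return capacity_mah / i_avg_ma

# Hypothetical numbers: 5 segments/s -> 0.2 s period, 2 ms awake per
# segment, 200 mA active, 0.1 mA sleep, 2000 mAh battery.
i_avg = average_current_ma(200.0, 0.1, 0.002, 0.2)
life = battery_life_hours(2000.0, i_avg)
```

With these assumed numbers the sleep current is negligible next to the active bursts, which is why shaving wake time (as the optimization strategies in the paper aim to do) dominates the battery-life gain.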
Award ID(s):
1813242
NSF-PAR ID:
10322925
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM International Symposium on Mobility Management and Wireless Access
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Apple Wireless Direct Link (AWDL) is a key protocol in Apple's ecosystem, used by over one billion iOS and macOS devices for device-to-device communications. AWDL is a proprietary extension of the IEEE 802.11 (Wi-Fi) standard and integrates with Bluetooth Low Energy (BLE) to provide services such as Apple AirDrop. We conduct the first security and privacy analysis of AWDL and its integration with BLE. We uncover several security and privacy vulnerabilities, ranging from design flaws to implementation bugs, that lead to: a man-in-the-middle (MitM) attack enabling stealthy modification of files transmitted via AirDrop; denial-of-service (DoS) attacks preventing communication; privacy leaks that enable user identification and long-term tracking, undermining MAC address randomization; and DoS attacks enabling targeted or simultaneous crashing of all neighboring devices. The flaws span AirDrop's BLE discovery mechanism, AWDL synchronization, UI design, and Wi-Fi driver implementation. Our analysis is based on a combination of reverse engineering of protocols and code, supported by analysis of patents. We provide proof-of-concept implementations and demonstrate that the attacks can be mounted using a low-cost ($20) micro:bit device and an off-the-shelf Wi-Fi card. We propose practical and effective countermeasures. While Apple was able to issue a fix for a DoS attack vulnerability after our responsible disclosure, the other security and privacy vulnerabilities require the redesign of some of their services.
  2. Data files were used in support of the research paper titled “Mitigating RF Jamming Attacks at the Physical Layer with Machine Learning" which has been submitted to the IET Communications journal.

    ---------------------------------------------------------------------------------------------

    All data was collected using the SDR implementation shown here: https://github.com/mainland/dragonradio/tree/iet-paper. In particular, for antenna state selection, the files developed for this paper are located in 'dragonradio/scripts/':

    • 'ModeSelect.py': class used to define the antenna state selection algorithm
    • 'standalone-radio.py': SDR implementation for normal radio operation with reconfigurable antenna
    • 'standalone-radio-tuning.py': SDR implementation for hyperparameter tuning
    • 'standalone-radio-onmi.py': SDR implementation for omnidirectional mode only

    ---------------------------------------------------------------------------------------------

    Authors: Marko Jacovic, Xaime Rivas Rey, Geoffrey Mainland, Kapil R. Dandekar
    Contact: krd26@drexel.edu

    ---------------------------------------------------------------------------------------------

    Top-level directories and content will be described below. Detailed descriptions of experiments performed are provided in the paper.

    ---------------------------------------------------------------------------------------------

    classifier_training: files used for training the classifiers that are integrated into the SDR platform

    • 'logs-8-18' directory contains OTA SDR collected log files for each jammer type and under normal operation (including congested and weaklink states)
    • 'classTrain.py' is the main parser for training the classifiers
    • 'trainedClassifiers' contains the output classifiers generated by 'classTrain.py'

    post_processing_classifier: contains logs of online classifier outputs and processing script

    • 'class' directory contains .csv logs of each RTE and OTA experiment for each jamming and operation scenario
    • 'classProcess.py' parses the log files and provides a classification report and confusion matrix for the multi-class and binary classifiers for each observed scenario - found in 'results->classifier_performance'

    post_processing_mgen: contains MGEN receiver logs and parser

    • 'configs' contains JSON files to be used with parser for each experiment
    • 'mgenLogs' contains MGEN receiver logs for each OTA and RTE experiment described. Within each experiment, logs are separated by 'mit' for mitigation used, 'nj' for no jammer, and 'noMit' for no mitigation technique used. File names take the form *_cj_* for constant jammer, *_pj_* for periodic jammer, *_rj_* for reactive jammer, and *_nj_* for no jammer. Performance figures are found in 'results->mitigation_performance'

    ray_tracing_emulation: contains files related to Drexel area, Art Museum, and UAV Drexel area validation RTE studies.

    • The directory contains a detailed 'readme.txt'.
    • Please note: the processing files and data logs present in the 'validation' folder were developed by Wolfe et al. and should be cited as such, unless explicitly stated otherwise.
      • S. Wolfe, S. Begashaw, Y. Liu and K. R. Dandekar, "Adaptive Link Optimization for 802.11 UAV Uplink Using a Reconfigurable Antenna," MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM), 2018, pp. 1-6, doi: 10.1109/MILCOM.2018.8599696.

    results: contains the results obtained from the study

    • 'classifier_performance' contains .txt files summarizing binary and multi-class performance of the online SDR system. Files obtained using 'post_processing_classifier'.
    • 'mitigation_performance' contains figures generated by 'post_processing_mgen'.
    • 'validation' contains the RTE and OTA performance comparison obtained by 'ray_tracing_emulation->validation->matlab->outdoor_hover_plots.m'

    tuning_parameter_study: contains the OTA log files for antenna state selection hyperparameter study

    • 'dataCollect' contains a folder for each jammer considered in the study; inside each folder, each CSV file corresponds to a different configuration of the learning parameters of the reconfigurable antenna. The configuration selected was the one that performed best across all these experiments and is described in the paper.
    • 'data_summary.txt': contains the summaries from all the CSV files for convenience.
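The MGEN log filename convention described above ('_cj_', '_pj_', '_rj_', '_nj_') can be decoded with a small helper. This is an illustrative sketch only, not part of the released scripts; the function and dictionary names are ours.

```python
# Tags follow the filename convention used in 'mgenLogs' (assumed here
# for illustration; the paper's actual parsers are the released scripts).
JAMMER_TAGS = {"_cj_": "constant", "_pj_": "periodic",
               "_rj_": "reactive", "_nj_": "none"}

def jammer_type(filename):
    """Return the jamming scenario encoded in an MGEN log filename."""
    for tag, scenario in JAMMER_TAGS.items():
        if tag in filename:
            return scenario
    raise ValueError("no jammer tag found in " + repr(filename))
```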
     
  3. Packet-level network simulators such as ns-3 require accurate physical (PHY) layer models of packet error rate (PER) for wideband transmission over fading wireless channels. To manage complexity and achieve practical runtimes, suitable link-to-system mappings can convert high-fidelity PHY layer models for use by packet-level simulators. This work reports two new contributions to the ns-3 Wi-Fi module, which presently contains error models only for Single Input Single Output (SISO), additive white Gaussian noise (AWGN) channels. To improve this, a complete implementation of a link-to-system mapping technique for IEEE 802.11 TGn fading channels is presented, including a method for efficient generation of channel realizations within ns-3. The runtime of this prior method, however, suffers from scalability issues as the dimensionality of Multiple Input Multiple Output (MIMO) systems grows. We therefore propose a novel method that directly characterizes the probability distribution of the "effective SNR" in link-to-system mapping. This approach requires modest storage and not only reduces ns-3 runtime but is also insensitive to growth in MIMO dimensionality. We describe the principles of this new method and provide details about its implementation, performance, and validation in ns-3.
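As one concrete illustration of link-to-system mapping, the widely used exponential effective SNR mapping (EESM) collapses per-subcarrier SNRs over a fading channel into a single effective SNR that can be looked up against an AWGN PER curve. The sketch below assumes EESM with a per-MCS calibration parameter beta; the ns-3 work described above may use a different mapping, and the function name is ours.

```python
import math

def eesm_effective_snr(snr_linear, beta):
    """Exponential effective SNR mapping: collapse per-subcarrier linear
    SNRs into one AWGN-equivalent SNR. beta is calibrated per MCS so that
    the AWGN PER at the effective SNR matches the fading-channel PER."""
    n = len(snr_linear)
    return -beta * math.log(sum(math.exp(-s / beta) for s in snr_linear) / n)
```

A useful sanity check on the mapping: when every subcarrier sees the same SNR, the effective SNR equals it, and unequal SNRs always map below the arithmetic mean (the weakest subcarriers dominate the error rate).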
  4. Internet of Things (IoT) devices exchange certificates and authorization tokens over the IEEE 802.15.4 radio medium, which supports a Maximum Transmission Unit (MTU) of 127 bytes. However, these credentials are significantly larger than the MTU and are therefore sent in a large number of fragments. As IoT devices are resource-constrained and battery-powered, fragment processing imposes considerable computation and communication overheads on both sender and receiver devices, which limits their ability to serve real-time requests. Moreover, the fragment processing operations increase energy consumption by CPUs and radio transceivers, which results in shorter battery life. In this article, we propose CATComp, a compression-aware authorization protocol for the Constrained Application Protocol (CoAP) and Datagram Transport Layer Security (DTLS) that enables IoT devices to exchange small-sized certificates and capability tokens over the IEEE 802.15.4 medium. CATComp introduces additional messages in the CoAP and DTLS handshakes that allow communicating devices to negotiate a compression method, which the devices use to reduce the credentials' sizes before sending them over an IEEE 802.15.4 link. The decrease in the size of the security materials minimizes the total number of packet fragments, the communication overheads for fragment delivery, fragment processing delays, and energy consumption. As such, devices can respond to requests faster and have longer battery life. We implement a prototype of CATComp on Contiki-enabled RE-Mote IoT devices and provide a performance analysis of CATComp. The experimental results show that communication latency and energy consumption are reduced when CATComp is integrated with CoAP and DTLS.
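A back-of-envelope sketch of why credential size drives fragment count over IEEE 802.15.4, and how compression helps. The 80-byte payload-per-fragment figure and the toy credential below are assumptions for illustration only, not values from CATComp, and zlib merely stands in for whatever compression method the devices negotiate.

```python
import math
import zlib

def fragment_count(credential_len, payload_per_fragment=80):
    """Fragments needed to carry a credential of credential_len bytes when
    each 127-byte 802.15.4 frame carries payload_per_fragment payload bytes
    after headers (80 is an assumed figure, not a CATComp value)."""
    return math.ceil(credential_len / payload_per_fragment)

# Toy credential: highly redundant, so it compresses well.
cert = b"-----BEGIN CERTIFICATE-----" + b"A" * 900 + b"-----END CERTIFICATE-----"
compressed = zlib.compress(cert, 9)
```

Fewer fragments means fewer transmissions, fewer reassembly operations, and less radio-on time, which is exactly the lever CATComp's negotiated compression pulls.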
  5. Tomorrow's massive-scale IoT sensor networks are poised to drive uplink traffic demand, especially in areas of dense deployment. To meet this demand, however, network designers leverage tools that often require accurate estimates of Channel State Information (CSI), which incurs a high overhead and thus reduces network throughput. Furthermore, the overhead generally scales with the number of clients, and so is of special concern in such massive IoT sensor networks. While prior work has used transmissions over one frequency band to predict the channel of another frequency band on the same link, this paper takes the next step in the effort to reduce CSI overhead: predict the CSI of a nearby but distinct link. We propose Cross-Link Channel Prediction (CLCP), a technique that leverages multi-view representation learning to predict the channel response of a large number of users, thereby reducing channel estimation overhead further than previously possible. CLCP's design is highly practical, exploiting existing transmissions rather than dedicated channel sounding or extra pilot signals. We have implemented CLCP for two different Wi-Fi versions, namely 802.11n and 802.11ax, the latter being the leading candidate for future IoT networks. We evaluate CLCP in two large-scale indoor scenarios involving both line-of-sight and non-line-of-sight transmissions with up to 144 different 802.11ax users and four different channel bandwidths, from 20 MHz up to 160 MHz. Our results show that CLCP provides a 2× throughput gain over baseline and a 30% throughput gain over existing prediction algorithms. 