Search for: All records

Award ID contains: 2348818

  1. Because ambient energy sources supply only small, transient amounts of energy and super-capacitors offer limited energy storage, energy harvesting (EH) nodes can perform only limited operations and are vulnerable to frequent faults caused by energy scarcity. Such faults reduce reliability and energy utility through data collisions, lost data, and idle listening. To address these challenges, this work implements a novel task scheduling scheme that minimizes energy waste and maximizes throughput under these scenarios and constraints. To demonstrate its effectiveness, we evaluate the scheme on a green test bed built with LoRa nodes.
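A minimal sketch of an energy-aware task scheduler in the spirit of the entry above, assuming a greedy run-what-fits policy; the task model, parameters, and policy are illustrative assumptions, not the paper's actual scheme:

```python
# Illustrative energy-aware task scheduler for an energy-harvesting node.
# Task model, capacities, and the greedy policy are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    energy_cost: float   # joules drawn from the super-capacitor per run
    data_yield: int      # bytes delivered per successful run

def schedule(tasks, capacity, stored, harvest_per_slot, slots):
    """Each slot, harvest energy, then run the best-value task that fits the
    current charge; defer work instead of failing mid-task when energy is scarce."""
    ranked = sorted(tasks, key=lambda t: t.data_yield / t.energy_cost, reverse=True)
    delivered = 0
    for _ in range(slots):
        stored = min(capacity, stored + harvest_per_slot)   # super-capacitor tops out
        runnable = next((t for t in ranked if t.energy_cost <= stored), None)
        if runnable is not None:
            stored -= runnable.energy_cost
            delivered += runnable.data_yield
    return delivered, stored

# Example: a LoRa uplink costs more energy than sensing but delivers more data.
tasks = [Task("sense", 0.2, 16), Task("lora_tx", 1.5, 128)]
print(schedule(tasks, capacity=3.0, stored=1.0, harvest_per_slot=0.5, slots=20))
```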
  2. Emerging unmanned aerial vehicles (UAVs) such as quadcopters offer a reliable, controllable, and flexible way of ferrying information from energy-harvesting-powered IoT devices in remote areas to IoT edge servers. However, UAV deployment faces a major challenge: limited flight range due to the need for recharging, especially when charging stations are located far from the monitoring area, which results in inefficient energy usage. To mitigate these challenges, we propose placing multiple charging stations in the field, each equipped with a powerful energy harvester and acting as a cluster head that collects data from the sensor nodes under its jurisdiction. In this way, the UAV can remain in the field continuously and retrieve data while charging. However, the intermittent and unpredictable nature of energy harvesting can render the information stored at cluster heads stale or even obsolete. To tackle this issue, we propose a Deep Reinforcement Learning (DRL)-based path planning scheme for UAVs. The DRL agent gathers global information from the UAV to update its input environmental states and outputs the location of the next stop so as to optimize the overall age of information (AoI) of the whole network. Experiments show that the proposed DDQN reliably reduces the AoI by 3.7% compared with baseline techniques.
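A minimal Double DQN (DDQN) update sketch for choosing the UAV's next cluster-head stop, as described in the entry above; the state encoding, network sizes, and reward shaping are assumptions for illustration, and only the general DDQN idea (online net selects the action, target net evaluates it) is standard:

```python
# Illustrative DDQN pieces for selecting the next cluster head to visit.
# State layout, hidden sizes, and reward are assumptions made for this sketch.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, n_heads, hidden=128):
        super().__init__()
        # Assumed state: per-cluster-head age of information plus UAV position (x, y).
        self.net = nn.Sequential(
            nn.Linear(n_heads + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, n_heads))   # one Q-value per candidate next stop

    def forward(self, s):
        return self.net(s)

def ddqn_targets(online, target, batch, gamma=0.99):
    """Compute Double DQN targets: the online net picks the next action,
    the target net scores it, avoiding the max-operator overestimation."""
    s, a, r, s_next, done = batch   # tensors of states, actions, rewards, next states, done flags
    with torch.no_grad():
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_next).squeeze(1)
        # The reward could be, e.g., the negative total AoI after the chosen visit.
        return r + gamma * (1.0 - done) * q_next
```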
  3. As a next-generation battery substitute for IoT systems, energy harvesting (EH) technology revolutionizes the IoT industry with environmental friendliness, ubiquitous accessibility, and sustainability, enabling various self-sustaining IoT applications. However, because EH power is weak and intermittent, the performance of EH-powered IoT systems and of their collaborative routing mechanisms can deteriorate severely, causing data packet loss during each power failure. This phenomenon makes conventional routing policies and energy allocation strategies impractical. Given the complexity of the problem, reinforcement learning (RL) appears to be one of the most promising and applicable methods to address this challenge. Nevertheless, even when the energy allocation and routing policy are jointly optimized by an RL method, an inappropriate multi-hop network topology severely degrades data collection performance because of the energy restrictions of EH devices. Therefore, this article first conducts a thorough mathematical discussion and develops a topology design and validation algorithm for energy harvesting scenarios. It then develops DeepIoTRouting, a distributed and scalable deep reinforcement learning (DRL)-based approach that jointly addresses routing and energy allocation for the energy-harvesting-powered distributed IoT system. The experimental results show that, with topology optimization, DeepIoTRouting achieves at least a 38.71% improvement in the amount of data delivered to the sink in a 20-device IoT network, significantly outperforming state-of-the-art methods.
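A minimal per-node sketch of jointly choosing a next hop and an energy budget, in the spirit of the routing and energy-allocation idea above; the tabular Q-learning form, state discretization, and action set are illustrative assumptions, whereas DeepIoTRouting itself is described as a deep RL method:

```python
# Illustrative per-node agent that jointly picks (next hop, energy budget).
# The tabular formulation and parameters are assumptions made for this sketch.
import random
from collections import defaultdict

class NodeAgent:
    def __init__(self, neighbors, energy_levels=(1, 2, 3), eps=0.1, alpha=0.2, gamma=0.9):
        # Joint action: which neighbor to forward to, and how much energy to spend.
        self.actions = [(n, e) for n in neighbors for e in energy_levels]
        self.q = defaultdict(float)        # Q[(state, action)]
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        """Epsilon-greedy choice of (next hop, energy budget) for the current state,
        e.g. a discretized (stored energy, queue length) tuple."""
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """One Q-learning step; reward could be data delivered to the sink minus
        a penalty for energy spent or for a power failure."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td
```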