


Title: DAO²: Overcoming Overall Storage Overflow in Intermittently Connected Sensor Networks
Many emerging sensor network applications operate in challenging environments wherein the base station is unavailable. Data generated from such intermittently connected sensor networks (ICSNs) must be stored inside the network for some unpredictable time before uploading opportunities become available. Consequently, sensory data could overflow the limited storage capacity available in the entire network, making discarding valuable data inevitable. To overcome such overall storage overflow in ICSNs, we propose and study a new algorithmic framework called data aggregation for overall storage overflow (DAO²). Utilizing the spatial data correlation that commonly exists among sensory data, DAO² employs data aggregation techniques to reduce the overflow data size while minimizing the total energy consumption in data aggregation. At the core of our framework are two new graph-theoretic problems that have not been studied before. We refer to them as the traveling salesmen placement problem (TSP²) and the quota traveling salesmen placement problem (Q-TSP²). Different from the well-known multiple traveling salesman problem (mTSP) and its variants, which mainly focus on the routing of multiple salesmen initially located at fixed locations, TSP² and Q-TSP² must decide the placement as well as the routing of the traveling salesmen. We prove that both problems are NP-hard and design approximation, heuristic, and distributed algorithms. Our algorithms outperform the state-of-the-art data aggregation work with base stations by up to 71.8% in energy consumption.
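As an informal illustration of the placement-plus-routing flavor of TSP², the sketch below places k aggregation tours on a connected weighted graph with a farthest-point heuristic and then grows each tour in nearest-neighbor fashion. This is only a minimal sketch under assumed inputs; it is not the paper's approximation algorithm, and all names (place_and_route, k, weight) are illustrative.

```python
# Illustrative sketch of "place k traveling salesmen, then route each one".
# NOT the paper's algorithm; assumes a connected, weighted undirected graph.
import networkx as nx

def place_and_route(G, k, weight="weight"):
    """Pick k well-separated start nodes, then grow nearest-neighbor tours."""
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    nodes = list(G.nodes)

    # Farthest-point placement: repeatedly add the node farthest from the
    # already-chosen starts (a k-center-style heuristic).
    starts = [nodes[0]]
    while len(starts) < k:
        starts.append(max(nodes, key=lambda v: min(dist[s][v] for s in starts)))

    # Nearest-neighbor routing: in round-robin order, each salesman extends
    # its tour to the closest not-yet-visited node.
    unvisited = set(nodes) - set(starts)
    tours = {s: [s] for s in starts}
    while unvisited:
        for s in starts:
            if not unvisited:
                break
            cur = tours[s][-1]
            nxt = min(unvisited, key=lambda v: dist[cur][v])
            tours[s].append(nxt)
            unvisited.remove(nxt)
    return tours
```

Unlike this greedy sketch, the paper's algorithms also handle aggregation quotas (Q-TSP²) and carry provable approximation guarantees.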
Award ID(s): 2131309
NSF-PAR ID: 10471709
Publisher / Repository: IEEE Xplore
Journal Name: IEEE/ACM Transactions on Networking
Volume: 31
Issue: 6
ISSN: 1063-6692
Page Range / eLocation ID: 1 to 16
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We focus on sensor networks deployed in challenging environments, wherein sensors do not always have connected paths to a base station, and propose a new data resilience problem. We refer to it as DRE²: data resiliency in extreme environments. As there are no connected paths between sensors and the base station, the goal of DRE² is to maximize data resilience by preserving the overflow data inside the network for the maximum amount of time, considering that sensor nodes have limited storage capacity and unreplenishable battery power. We propose a quadratic programming-based algorithm that solves DRE² optimally. As quadratic programming is NP-hard and thus not scalable, we design two time-efficient heuristics based on different network metrics. We show via extensive experiments that all of our algorithms achieve high data resilience, while the minimum cost flow-based heuristic is the most energy-efficient. Our algorithms tolerate node failures and network partitions caused by energy depletion of sensor nodes. Underlying our algorithms are flow networks that generalize the edge capacity constraint well accepted in traditional network flow theory.
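As a rough illustration of the minimum cost flow idea mentioned above (not the authors' exact flow-network construction), the snippet below splits each storage node so that its free storage caps an internal edge, adds a virtual sink, and solves the resulting min-cost flow with networkx; all node names, capacities, and per-unit energy costs are made up.

```python
# Rough illustration of casting overflow-data offloading as min-cost flow.
# The paper's actual construction differs; all numbers here are invented.
import networkx as nx

G = nx.DiGraph()
# Sensors with overflow data act as supplies (negative demand, in data units).
G.add_node("s1", demand=-4)
G.add_node("s2", demand=-2)
# Each storage node is split so its free storage caps an internal edge.
for node, free_storage in [("a", 3), ("b", 5)]:
    G.add_edge(node + "_in", node + "_out", capacity=free_storage, weight=0)
    G.add_edge(node + "_out", "t", weight=0)  # stored data reaches a virtual sink
G.add_node("t", demand=6)                     # sink absorbs all 6 overflow units
# Wireless links, weighted by per-unit transmission energy (made-up costs).
G.add_edge("s1", "a_in", weight=2)
G.add_edge("s1", "b_in", weight=5)
G.add_edge("s2", "b_in", weight=1)

cost, flow = nx.network_simplex(G)
print("total energy cost:", cost)             # 3*2 + 1*5 + 2*1 = 13 here
print("units stored at node a:", flow["a_in"]["a_out"])
```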
  2. The increasing availability of autonomous small-size Unmanned Aerial Vehicles (UAVs) has provided a promising way to gather data from Wireless Sensor Networks (WSNs), with the advantages of high mobility, flexibility, and speed. However, few works have considered settings in which multiple UAVs are used collaboratively and fine-grained trajectory plans, including detailed traveling and hovering plans in the continuous space, are devised for collecting data from the network. In this paper, we investigate the Fine-grained Trajectory Plan for multi-UAVs (FTP) problem, in which m UAVs (m ≥ 1) are used to collect data from a given WSN. The problem entails not only finding the flight paths of multiple UAVs but also designing the detailed hovering and traveling plans along those paths for efficient data gathering. The objective is to minimize the maximum flight time of the UAVs such that all sensory data of the WSN is collected and transported to the base station. We first propose a mathematical model of the FTP problem and prove that it is NP-hard. To solve it, we first study the special case of m = 1, called the FTP with Single UAV (FTPS) problem, and propose a constant-factor approximation algorithm for it. Building on FTPS, we then propose an approximation algorithm for the general FTP problem with m > 1 that also guarantees a constant factor of the optimal solution. The proposed algorithms are validated by extensive simulations.
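For intuition only, the toy heuristic below targets the same min-max flight-time objective by greedily assigning sensors to whichever UAV's tour would finish earliest and then closing each tour at the base station; it is not the paper's constant-factor approximation, and the coordinates, speed, and hover time are assumed.

```python
# Toy heuristic for the min-max (makespan) flavor of multi-UAV data collection;
# NOT the paper's constant-factor algorithm. All inputs are made up.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plan_tours(base, sensors, m, speed=10.0, hover=5.0):
    """Greedily append each sensor to the tour whose completion time stays smallest."""
    tours = [[base] for _ in range(m)]
    times = [0.0] * m
    # Visit farther sensors first so the greedy balancing works better (LPT-style).
    for s in sorted(sensors, key=lambda p: -dist(base, p)):
        best = min(range(m), key=lambda i: times[i] + dist(tours[i][-1], s) / speed)
        times[best] += dist(tours[best][-1], s) / speed + hover
        tours[best].append(s)
    # Each UAV finally returns to the base station.
    for i in range(m):
        times[i] += dist(tours[i][-1], base) / speed
    return tours, max(times)

tours, makespan = plan_tours((0, 0), [(3, 4), (10, 0), (6, 8), (1, 9)], m=2)
print("max flight time:", round(makespan, 2))
```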
  3. In optical networks, simulation is a cost-efficient and powerful way to plan and design networks. It helps researchers and network designers quickly obtain preliminary results on network performance and easily adjust the design. Unfortunately, most optical simulators are not open-source, and there is currently a lack of optical network simulation tools that leverage machine learning techniques. Compared to Wavelength Division Multiplexing (WDM) networks, Elastic Optical Networks (EONs) use finer channel spacing and a more flexible way of using spectrum resources, thus increasing spectrum efficiency. Network resource allocation is a popular research topic in optical networks; in EONs it takes the form of the Routing, Modulation and Spectrum Allocation (RMSA) problem, which aims to allocate sufficient network resources, including selecting the best modulation format, to satisfy a call request. SimEON is an open-source simulation tool exclusively for EONs, capable of simulating different EON configurations and of supporting the design of RMSA and regenerator placement/assignment algorithms. It can also be extended with proper modeling to simulate CapEx, OpEx, and energy consumption for the network. Deep learning (DL) is a subset of machine learning that employs neural networks, large volumes of data, and various training algorithms to solve complex problems. In this paper, we extend the capabilities of SimEON by integrating the DeepRMSA algorithm into the existing simulator. We compare the performance of conventional RMSA and DeepRMSA algorithms and provide a convenient way for users to compare different algorithms' performance and integrate other machine learning algorithms.
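As a point of reference for what an RMSA policy decides, here is a minimal classical baseline: shortest-path routing with first-fit spectrum assignment under the spectrum continuity and contiguity constraints. It is not SimEON's or DeepRMSA's actual interface, and the topology, slot count, and demand sizes are assumed.

```python
# Minimal first-fit RMSA baseline (shortest path + first contiguous free slots);
# illustrative only, not SimEON's or DeepRMSA's interface.
import networkx as nx

NUM_SLOTS = 320  # assumed number of spectrum slices per link

def first_fit(G, src, dst, demand_slots):
    """Route on the shortest path, then grab the first contiguous free slot block."""
    path = nx.shortest_path(G, src, dst, weight="length")
    links = list(zip(path, path[1:]))
    for start in range(NUM_SLOTS - demand_slots + 1):
        block = range(start, start + demand_slots)
        # Continuity + contiguity: the same block must be free on every link.
        if all(G.edges[u, v]["slots"][i] for (u, v) in links for i in block):
            for (u, v) in links:
                for i in block:
                    G.edges[u, v]["slots"][i] = False  # mark occupied
            return path, (start, start + demand_slots - 1)
    return None  # blocked: no contiguous block available end-to-end

G = nx.Graph()
for u, v, km in [("A", "B", 400), ("B", "C", 300), ("A", "C", 900)]:
    G.add_edge(u, v, length=km, slots=[True] * NUM_SLOTS)
print(first_fit(G, "A", "C", demand_slots=4))
```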
  4. Network cache allocation and management are important aspects of an Information-Centric Network (ICN) design, such as one based on Named Data Networking (NDN). We address the problem of optimal cache size allocation and content placement in an ICN in order to maximize the caching gain resulting from routing cost savings. While prior art assumes a given cache size at each network node and focuses on content placement, we study the problem when a global, network-wide cache storage budget is given and we solve for the optimal per-node cache allocation. This problem arises in cloud-based network settings where each network node is virtualized and housed within a cloud data center node with associated dynamic storage resources acquired from the cloud node as needed. As the offline centralized version of the optimal cache allocation problem is NP-hard, we develop a distributed adaptive algorithm that provides an approximate solution within a constant factor from the optimal. Performance evaluation of the algorithm is carried out through extensive simulations over multiple network topologies, demonstrating that our proposal significantly outperforms existing cache allocation algorithms.
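Purely as a sketch of the "spend a global cache budget where it saves the most routing cost" idea, the snippet below allocates cache slots greedily by marginal savings. Unlike the paper's algorithm it is centralized, offline, and assumes the per-item savings are independent; all node names, items, and savings values are invented.

```python
# Illustrative greedy allocation of a global cache budget by marginal routing-cost
# savings; centralized and offline, unlike the paper's distributed algorithm.

def allocate(budget, savings):
    """savings[node][item] = routing cost saved if `item` is cached at `node`."""
    candidates = [(gain, node, item)
                  for node, items in savings.items()
                  for item, gain in items.items()]
    candidates.sort(reverse=True)                   # largest marginal saving first
    alloc = {node: set() for node in savings}
    for gain, node, item in candidates[:budget]:    # one cache slot per cached item
        alloc[node].add(item)
    return alloc

savings = {
    "v1": {"objA": 9.0, "objB": 2.0},
    "v2": {"objA": 4.0, "objC": 7.0},
}
print(allocate(budget=3, savings=savings))
```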