Title: Collaborative Hybrid ARQ for CDMA-based Reliable Underwater Acoustic Communications
Achieving high throughput and reliability in underwater acoustic networks is a challenging task due to the bandwidth-limited and unpredictable nature of the channel. In a multi-node structure, such as in the Internet of Underwater Things (IoUT), the efficiency of links varies dynamically because of channel variations. When the channel is not in good condition, e.g., when in deep fade, channel-coding techniques fail to deliver the required information even with multiple rounds of retransmissions. An efficient and agile collaborative strategy among the nodes is required to assign appropriate resources to each link based on its status and capability. Hence, a cross-layer collaborative strategy is introduced to increase the throughput of the network by allocating an unequal share of system resources to different nodes/links. The proposed solution adjusts the physical- and link-layer parameters in a collaborative manner for a Code Division Multiple Access (CDMA)-based underwater network. An adaptive Hybrid Automatic Repeat Request (HARQ) solution is employed to guarantee reliable communications against errors in poor communication links. Results are validated using data collected from the LOON underwater testbed, which is hosted by the NATO STO Centre for Maritime Research and Experimentation (CMRE) in La Spezia, Italy.
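As a rough illustration of the adaptive HARQ idea summarized above, the sketch below shows how a per-link controller might lower the FEC code rate and raise the CDMA spreading factor on each retransmission round, mirroring the unequal resource allocation for weak links. The parameter names, starting values, and the decode-success model are assumptions made for illustration, not the scheme used in the paper.

```python
# Minimal sketch (not the authors' implementation): an adaptive HARQ controller
# that reacts to repeated decoding failures on a weak link by lowering the FEC
# code rate and increasing the CDMA spreading factor before retransmitting.
import random

def send_with_adaptive_harq(link_snr_db, max_rounds=4):
    """Illustrative HARQ loop; the decode-success model below is hypothetical."""
    code_rate = 0.8          # starting FEC code rate (assumed value)
    spreading_factor = 4     # starting CDMA spreading factor (assumed value)

    for round_idx in range(1, max_rounds + 1):
        # Hypothetical model: better SNR, lower code rate, and more spreading
        # all increase the chance that the receiver decodes the packet.
        p_ok = min(1.0, (link_snr_db / 20.0) * (1.0 - code_rate + 0.2)
                   * (spreading_factor / 4.0))
        if random.random() < p_ok:
            return round_idx          # ACK received after this many rounds

        # NACK: give this poor link a larger share of resources next round.
        code_rate = max(0.3, code_rate - 0.2)
        spreading_factor = min(16, spreading_factor * 2)

    return None                       # packet dropped after max_rounds

if __name__ == "__main__":
    print(send_with_adaptive_harq(link_snr_db=6))
```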
Award ID(s):
1763709
NSF-PAR ID:
10112863
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2018 Fourth Underwater Communications and Networking Conference (UComms)
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Underwater networks of wireless sensors deployed along the coast or in deep water are the most promising solution for the development of underwater monitoring, exploration, and surveillance applications. A key feature of underwater networks that can significantly enhance current monitoring applications is the ability to accommodate real-time video information on an underwater communication link. In fact, while today's monitoring relies on the exchange of simple discrete information, e.g., water temperature and particle concentration, among others, introducing real-time streaming of non-static images between wireless underwater nodes can completely revolutionize the underwater monitoring scenario. To achieve this goal, underwater links are required to support a sufficiently high data rate, compatible with the streaming rate of the transmitted video sequence. Unfortunately, the intrinsic characteristics of the underwater propagation medium have made this objective extremely challenging. In this paper, we present the first physical-layer transmission scheme for short-range, high-data-rate ultrasonic underwater communications. The proposed solution, which we refer to as Underwater UltraSonar (U2S), is based on the idea of transmitting short information-bearing carrierless ultrasonic signals, e.g., pulses, following a pseudo-random adaptive time-hopping pattern with superimposed rate-adaptive Reed-Solomon forward error correction (FEC) channel coding. We also present the design of the first prototype of a software-defined underwater ultrasonic transceiver that implements the U2S PHY transmission scheme, through which we extensively evaluate U2S performance in real-scenario underwater experiments at the PHY layer, i.e., Bit Error Rate (BER), and at the application layer, i.e., the structural similarity (SSIM) index. Results show that U2S links can support point-to-point data rates of up to 1.38 Mbps and that, by leveraging the flexibility of the adaptive time-hopping and adaptive channel-coding techniques, one can trade link throughput against energy consumption while still satisfying application-layer requirements.
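The sketch below illustrates the pseudo-random time-hopping idea described in this abstract: transmitter and receiver derive the same pulse-position pattern from a shared seed, and the frame length can be stretched when interference is high. The slot counts, the adaptation rule, and the seed-sharing mechanism are assumptions for illustration, not U2S parameters.

```python
# Minimal sketch of pseudo-random adaptive time-hopping (not the U2S code):
# a shared seed lets TX and RX regenerate the identical hopping pattern, and
# the number of slots per frame adapts to the estimated interference level.
import random

def time_hopping_pattern(seed, num_pulses, slots_per_frame):
    """Return the slot index used for each successive pulse."""
    rng = random.Random(seed)          # shared seed keeps TX and RX in sync
    return [rng.randrange(slots_per_frame) for _ in range(num_pulses)]

def adapt_slots_per_frame(base_slots, interference_level):
    """Illustrative rate adaptation: more interference -> longer frames,
    i.e., fewer pulses per second, trading throughput for robustness."""
    return base_slots * (1 + interference_level)

if __name__ == "__main__":
    slots = adapt_slots_per_frame(base_slots=8, interference_level=1)
    pattern = time_hopping_pattern(seed=42, num_pulses=10, slots_per_frame=slots)
    print(slots, pattern)   # identical at TX and RX given the same seed
```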
  2. Achieving reliable acoustic wireless video transmission in the extreme and uncertain underwater environment is challenging due to the limited bandwidth and the error-prone nature of the channel. Aiming to optimize the received video quality and the user's experience, an adaptive solution for underwater video transmission is proposed that is specifically designed for Multi-Input Multi-Output (MIMO)-based Software-Defined Acoustic Modems (SDAMs). To keep the video distortion under an acceptable threshold and the Physical-Layer Throughput (PLT) high, cross-layer techniques utilizing diversity/spatial multiplexing and Unequal Error Protection (UEP) are presented along with scalable video compression at the application layer. Specifically, the scalability of the utilized SDAM, with its high processing capabilities, is exploited in the proposed structure along with the temporal, spatial, and quality scalability of the Scalable Video Coding (SVC) H.264/MPEG-4 AVC compression standard. The transmitter broadcasts one video stream and realizes multicasting to different users. Experimental results from the Sonny Werblin Recreation Center, Rutgers University-NJ, are presented. Several scenarios with channels unknown at the transmitter are experimentally considered, with the hydrophones placed at different locations in the pool, to achieve the required SVC-based video Quality of Service (QoS) and Quality of Experience (QoE) given the channel state information and the robustness of the different SVC scalability options. The video quality level is determined by the best communication link, while the transmission scheme is decided based on the worst communication link, which guarantees that each user is able to receive the video with appropriate quality.
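As a concrete reading of the last sentence, the sketch below picks the number of SVC layers from the best link and the MIMO transmission scheme from the worst link; the SNR thresholds and function names are illustrative assumptions, not values from the experiments.

```python
# Sketch of the stated decision rule (thresholds are assumptions): the SVC
# quality level follows the best link, while the MIMO scheme (diversity vs.
# spatial multiplexing) is chosen to protect the worst link.

def choose_svc_layers(best_link_snr_db):
    """More SVC layers (temporal/spatial/quality) when the best link is strong."""
    if best_link_snr_db > 20:
        return 3
    if best_link_snr_db > 12:
        return 2
    return 1                      # base layer only

def choose_mimo_scheme(worst_link_snr_db):
    """Use spatial multiplexing only when even the weakest user can follow it."""
    return "spatial_multiplexing" if worst_link_snr_db > 15 else "diversity"

if __name__ == "__main__":
    links_snr_db = [9.0, 14.5, 22.0]   # hypothetical per-user channel estimates
    print(choose_svc_layers(max(links_snr_db)),
          choose_mimo_scheme(min(links_snr_db)))
```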
  3. The high reliability required by many future-generation network services can be enforced by proper resource assignments by means of logical partitions, i.e., network slices, applied in optical metro-aggregation networks. Different strategies can be applied to deploy the virtual network functions (VNFs) composing the slices over physical nodes, while providing different levels of resource isolation (among slices) and protection against failures, based on several available techniques. In optical metro-aggregation networks, protection can be ensured at different layers, and slice protection with traffic grooming calls for evolved multilayer protection approaches. In this paper, we investigate the problem of reliable slicing with protection at the lightpath layer for different levels of slice isolation and different VNF deployment strategies. We model the problem through an integer linear program (ILP), and we devise a heuristic for the joint optimization of VNF placement and lightpath selection. The heuristic maps nodes and links over the physical network in a coordinated manner and provides an effective placement of radio access network functions as well as the routing and wavelength assignment for the optical layer. The effectiveness of the proposed heuristic is validated by comparison with the optimal solution provided by the ILP. Our illustrative numerical results compare the impact of different levels of isolation, showing that higher levels of network and VNF isolation are characterized by higher costs in terms of optical and computation resources.
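To make the flavor of such a heuristic concrete, the sketch below greedily places VNFs on nodes with spare capacity and then assigns each virtual link a first-fit wavelength along a candidate path. It is a simplification under assumed data structures: the paper's heuristic coordinates placement and lightpath selection rather than performing them in two independent passes.

```python
# Greedy sketch of VNF placement plus routing and wavelength assignment
# (illustrative only; the paper's ILP and heuristic are more elaborate).

def place_slice(vnf_demands, node_capacity, candidate_paths, free_wavelengths):
    """vnf_demands: {vnf: cpu}; node_capacity: {node: cpu};
    candidate_paths: {virtual_link: [list of physical-link lists]};
    free_wavelengths: {physical_link: set of free wavelength indices}."""
    placement, lightpaths = {}, {}

    # 1) Map each VNF onto the first physical node with enough spare CPU.
    for vnf, cpu in vnf_demands.items():
        for node, cap in node_capacity.items():
            if cap >= cpu:
                placement[vnf] = node
                node_capacity[node] = cap - cpu
                break
        else:
            return None                       # no feasible placement

    # 2) For each virtual link, take the first candidate path that has a
    #    common free wavelength on every hop (first-fit assignment).
    for vlink, paths in candidate_paths.items():
        for path in paths:
            common = set.intersection(*(free_wavelengths[l] for l in path))
            if common:
                wl = min(common)
                for l in path:
                    free_wavelengths[l].discard(wl)
                lightpaths[vlink] = (path, wl)
                break
        else:
            return None                       # blocked: no continuous wavelength

    return placement, lightpaths
```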

     
  4. Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Eds.)
    The goal of this work was to design a low-cost computing facility that can support the development of an open source digital pathology corpus containing 1M images [1]. A single image from a clinical-grade digital pathology scanner can range in size from hundreds of megabytes to five gigabytes. A 1M image database requires over a petabyte (PB) of disk space. To do meaningful work in this problem space requires a significant allocation of computing resources. The improvements and expansions to our HPC (high-performance computing) cluster, known as Neuronix [2], required to support working with digital pathology fall into two broad categories: computation and storage. To handle the increased computational burden and increase job throughput, we are using Slurm [3] as our scheduler and resource manager. For storage, we have designed and implemented a multi-layer filesystem architecture to distribute a filesystem across multiple machines. These enhancements, which are entirely based on open source software, have extended the capabilities of our cluster and increased its cost-effectiveness. Slurm has numerous features that allow it to generalize to a number of different scenarios. Among the most notable is its support for GPU (graphics processing unit) scheduling. GPUs can offer a tremendous performance increase in machine learning applications [4], and Slurm's built-in mechanisms for handling them were a key factor in making this choice. Slurm has a general resource (GRES) mechanism that can be used to configure and enable support for resources beyond the ones provided by the traditional HPC scheduler (e.g., memory, wall-clock time), and GPUs are among the GRES types that can be supported by Slurm [5]. In addition to being able to track resources, Slurm does strict enforcement of resource allocation. This becomes very important as the computational demands of the jobs increase, ensuring that each job has all the resources it needs and does not take resources from other jobs. It is a common practice among GPU-enabled frameworks to query the CUDA runtime library/drivers and iterate over the list of GPUs, attempting to establish a context on all of them. Slurm is able to affect the hardware discovery process of these jobs, which enables a number of these jobs to run alongside each other even if the GPUs are in exclusive-process mode. To store large quantities of digital pathology slides, we developed a robust, extensible distributed storage solution. We utilized a number of open source tools to create a single filesystem, which can be mounted by any machine on the network. At the lowest layer of abstraction are the hard drives, which were split into four 60-disk chassis of 8 TB drives. To support these disks, we have two server units, each equipped with Intel Xeon CPUs and 128 GB of RAM. At the filesystem level, we have implemented a multi-layer solution that: (1) connects the disks together into a single filesystem/mountpoint using ZFS (the Zettabyte File System) [6], and (2) connects filesystems on multiple machines together to form a single mountpoint using Gluster [7]. ZFS, initially developed by Sun Microsystems, provides disk-level awareness and a filesystem which takes advantage of that awareness to provide fault tolerance. At the filesystem level, ZFS protects against data corruption and the infamous RAID write-hole bug by implementing a journaling scheme (the ZFS intent log, or ZIL) and copy-on-write functionality.
Each machine (1 controller + 2 disk chassis) has its own separate ZFS filesystem. Gluster, essentially a meta-filesystem, takes each of these and provides the means to connect them together over the network using distributed (similar to RAID 0, but without striping individual files) and mirrored (similar to RAID 1) configurations [8]. By implementing these improvements, it has been possible to expand the storage and computational power of the Neuronix cluster arbitrarily by scaling horizontally, supporting even the most computationally intensive endeavors. We have greatly improved the scalability of the cluster while maintaining its excellent price/performance ratio [1].
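A quick back-of-the-envelope check of the sizing stated above, with an assumed average image size between the quoted "hundreds of megabytes" and 5 GB, confirms that 1M images exceed a petabyte and that four 60-disk chassis of 8 TB drives provide roughly 1.9 PB of raw capacity (before RAID-Z parity, mirroring, and filesystem overhead).

```python
# Back-of-the-envelope check of the sizing claims above (raw capacity only).
NUM_IMAGES = 1_000_000
AVG_IMAGE_BYTES = 1.5e9        # assumed average between "hundreds of MB" and 5 GB

dataset_pb = NUM_IMAGES * AVG_IMAGE_BYTES / 1e15
print(f"Estimated dataset size: {dataset_pb:.2f} PB")         # ~1.5 PB > 1 PB

CHASSIS, DISKS_PER_CHASSIS, DISK_TB = 4, 60, 8
raw_tb = CHASSIS * DISKS_PER_CHASSIS * DISK_TB
print(f"Raw capacity: {raw_tb} TB = {raw_tb / 1000:.2f} PB")  # 1920 TB ~ 1.92 PB
```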
  5. Wireless networks are being applied in various industrial sectors, and they are poised to support mission-critical industrial IoT applications that require ultra-reliable, low-latency communications (URLLC). Ensuring predictable per-packet communication reliability is a basis of predictable URLLC, and scheduling and power control are two basic enablers. Scheduling and power control, however, are subject to challenges such as harsh environments, dynamic channels, and distributed network settings in industrial IoT. Existing solutions are mostly based on heuristic algorithms or asymptotic analysis of network performance, and field-deployable algorithms for ensuring predictable per-packet reliability are lacking. To address this gap, we examine the cross-layer design of joint scheduling and power control and analyze the associated challenges. We introduce the Perron–Frobenius theorem to demonstrate that scheduling is a must for ensuring predictable communication reliability, and, by investigating characteristics of interference matrices, we show that scheduling which keeps close-by links silent effectively constructs a set of links whose required reliability is feasible under proper transmission power control. Given that scheduling alone is unable to ensure predictable communication reliability while also ensuring high throughput and addressing fast-varying channel dynamics, we demonstrate how power control can help improve both the reliability at each time instant and the throughput in the long term. Based on the analysis, we propose a candidate framework for joint scheduling and power control, and we demonstrate how this framework behaves in guaranteeing per-packet communication reliability in the presence of wireless channel dynamics of different time scales. Collectively, these findings provide insight into the cross-layer design of joint scheduling and power control for ensuring predictable per-packet reliability in the presence of wireless network dynamics and uncertainties.
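For context on the Perron–Frobenius argument mentioned here, the sketch below implements the textbook feasibility test for SINR-constrained power control: the targets are jointly achievable exactly when the spectral radius of the normalized interference matrix is below one, in which case a minimal positive power vector exists. The gain matrix, SINR targets, and noise values are made up for illustration, and this is the standard formulation rather than necessarily the paper's exact model.

```python
# Sketch of the Perron-Frobenius feasibility test for SINR-constrained power
# control (textbook form; the numeric values below are made up).
import numpy as np

def feasible_power(G, gamma, noise):
    """G[i, j]: channel gain from transmitter j to receiver i; gamma: SINR
    targets; noise: receiver noise powers. Returns the minimal power vector
    meeting all targets if they are feasible, else None."""
    n = len(gamma)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                F[i, j] = gamma[i] * G[i, j] / G[i, i]   # normalized interference
    u = np.array([gamma[i] * noise[i] / G[i, i] for i in range(n)])

    if max(abs(np.linalg.eigvals(F))) >= 1:   # spectral radius test
        return None   # infeasible: some links must be silenced (scheduling)
    return np.linalg.solve(np.eye(n) - F, u)  # p = (I - F)^(-1) u >= 0

if __name__ == "__main__":
    G = np.array([[1.00, 0.10, 0.20],
                  [0.15, 0.90, 0.10],
                  [0.05, 0.20, 0.80]])
    print(feasible_power(G, gamma=[2.0, 2.0, 2.0], noise=[0.01, 0.01, 0.01]))
```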