
Title: Communication-Aware Robotics: Exploiting Motion for Communication
In this review, we present a comprehensive perspective on communication-aware robotics, an area that considers realistic communication environments and aims to jointly optimize communication and navigation. The main focus of the article is theoretical characterization and understanding of performance guarantees. We begin by summarizing the best prediction an unmanned vehicle can have of the channel quality at unvisited locations. We then consider the case of a single robot, showing how it can mathematically characterize the statistics of its traveled distance until connectivity and further plan its path to reach a connected location with optimality guarantees, in real channel environments and with minimum energy consumption. We then move to the case of multiple robots, showing how they can utilize their motions to enable robust information flow. We consider two specific robotic network configurations, robotic beamformers and robotic routers, and mathematically characterize properties of the co-optimum motion–communication decisions.
Award ID(s):
2008449
PAR ID:
10311755
Author(s) / Creator(s):
Date Published:
Journal Name:
Annual Review of Control, Robotics, and Autonomous Systems
Volume:
4
Issue:
1
ISSN:
2573-5144
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
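The abstract above centers on predicting channel quality at locations a robot has not yet visited and then planning motion against that prediction. As a rough illustration of the prediction step only, the sketch below fits a log-distance path-loss trend to a handful of measurements and adds a kriging-style correction for spatially correlated shadowing. The model form and every parameter value (path-loss fit, decorrelation distance, variances) are illustrative assumptions, not the paper's framework.

```python
# Hedged sketch (not the authors' code): predict received power (dB) at an unvisited
# location from sparse measurements, via a path-loss fit plus a kriging estimate of the
# spatially correlated shadowing residual. All parameter values are assumptions.
import numpy as np

def fit_path_loss(positions, powers_db, tx):
    """Least-squares fit of powers_db ~ alpha - 10*beta*log10(distance to tx)."""
    d = np.linalg.norm(positions - tx, axis=1)
    A = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
    (alpha, beta), *_ = np.linalg.lstsq(A, powers_db, rcond=None)
    return alpha, beta

def predict_power_db(query, positions, powers_db, tx, decorr=5.0, sigma2=4.0, noise2=1.0):
    """Path loss at the query point plus a kriging correction from nearby measurements."""
    alpha, beta = fit_path_loss(positions, powers_db, tx)
    mean = lambda p: alpha - 10.0 * beta * np.log10(np.linalg.norm(p - tx, axis=-1))
    resid = powers_db - mean(positions)                       # shadowing + fading residuals
    D = np.linalg.norm(positions[:, None] - positions[None, :], axis=2)
    K = sigma2 * np.exp(-D / decorr) + noise2 * np.eye(len(positions))
    k = sigma2 * np.exp(-np.linalg.norm(positions - query, axis=1) / decorr)
    return mean(query[None])[0] + k @ np.linalg.solve(K, resid)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tx = np.array([0.0, 0.0])
    pts = rng.uniform(1.0, 30.0, size=(40, 2))                # visited locations
    true_db = -40.0 - 10 * 2.5 * np.log10(np.linalg.norm(pts - tx, axis=1))
    meas = true_db + rng.normal(0.0, 2.0, size=40)            # noisy measurements
    print(predict_power_db(np.array([15.0, 15.0]), pts, meas, tx))
```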
More Like this
  1. The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets. 
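For a concrete feel of the round-based, communication-light structure described in item 1, here is a minimal single-process sketch of a CoCoA-style round for the lasso: coordinates are partitioned across simulated workers, each worker runs local coordinate descent on its own block against the shared residual, and the block updates are combined with a conservative averaging weight. This is not the CoCoA implementation; the problem sizes, the combining weight, and the choice of local solver are illustrative assumptions.

```python
# Hedged sketch: a toy round-based lasso solver in the spirit of a communication-efficient
# distributed framework. Each "worker" is just a coordinate block in one process.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_rounds(X, y, lam, num_workers=4, rounds=50, local_passes=5):
    n, d = X.shape
    w = np.zeros(d)
    residual = y - X @ w
    blocks = np.array_split(np.arange(d), num_workers)
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(rounds):
        deltas = np.zeros(d)
        for block in blocks:                          # "local" work, one block per worker
            w_loc, r_loc = w.copy(), residual.copy()
            for _ in range(local_passes):
                for j in block:
                    r_loc += X[:, j] * w_loc[j]       # remove coordinate j from residual
                    w_loc[j] = soft_threshold(X[:, j] @ r_loc, lam) / col_norm2[j]
                    r_loc -= X[:, j] * w_loc[j]
            deltas[block] = w_loc[block] - w[block]
        w += deltas / num_workers                     # conservative combine (weight 1/K)
        residual = y - X @ w
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 50))
    w_true = np.zeros(50); w_true[:5] = 1.0
    y = X @ w_true + 0.1 * rng.normal(size=200)
    print(np.round(lasso_rounds(X, y, lam=10.0), 2)[:10])
```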
  2. Algorithms for noiseless collaborative PAC learning have been analyzed and optimized in recent years with respect to sample complexity. In this paper, we study collaborative PAC learning with the goal of reducing communication cost at essentially no penalty to the sample complexity. We develop communication-efficient collaborative PAC learning algorithms using distributed boosting. We then consider the communication cost of collaborative learning in the presence of classification noise. As an intermediate step, we show how collaborative PAC learning algorithms can be adapted to handle classification noise. With this insight, we develop communication-efficient algorithms for collaborative PAC learning robust to classification noise.
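The boosting flavor in item 2 can be loosely illustrated with a multiplicative-weights scheme over players: a center samples more heavily from players the current hypothesis serves poorly, fits a weak learner each round, and outputs a majority vote. This is only a toy illustration of the general idea, not the paper's algorithm; the data model, stump learner, and all constants are assumptions.

```python
# Hedged sketch: multiplicative weights over players plus a stump learner per round.
import numpy as np

rng = np.random.default_rng(2)
TRUE_THRESHOLD = 0.3
players = [rng.uniform(lo, hi, size=500) for lo, hi in [(-1, 0.5), (0, 1), (0.25, 0.35)]]
labels = [(x > TRUE_THRESHOLD).astype(int) for x in players]

def fit_stump(x, y):
    """Pick the threshold minimizing training error for the rule 1[x > t]."""
    candidates = np.quantile(x, np.linspace(0, 1, 101))
    errs = [np.mean((x > t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(errs))]

weights = np.ones(len(players))
stumps = []
for _ in range(10):                                    # communication rounds
    probs = weights / weights.sum()
    counts = (200 * probs).astype(int) + 1             # samples per player this round
    idx = [rng.choice(len(x), size=c) for x, c in zip(players, counts)]
    x_mix = np.concatenate([x[i] for x, i in zip(players, idx)])
    y_mix = np.concatenate([y[i] for y, i in zip(labels, idx)])
    t = fit_stump(x_mix, y_mix)
    stumps.append(t)
    for k, (x, y) in enumerate(zip(players, labels)):  # boost poorly served players
        err = np.mean((x > t).astype(int) != y)
        weights[k] *= 2.0 if err > 0.05 else 1.0

def predict(x):
    votes = np.stack([(x > t).astype(int) for t in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)

for k, (x, y) in enumerate(zip(players, labels)):
    print(f"player {k} error: {np.mean(predict(x) != y):.3f}")
```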
  3. In modern machine learning, users often have to collaborate to learn distributions that generate the data. Communication can be a significant bottleneck. Prior work has studied homogeneous users, i.e., users whose data follow the same discrete distribution, and has provided optimal communication-efficient methods. However, these methods rely heavily on homogeneity and are less applicable in the common case when users' discrete distributions are heterogeneous. Here we consider a natural and tractable model of heterogeneity, where users' discrete distributions vary only sparsely, on a small number of entries. We propose a novel two-stage method named SHIFT: first, the users collaborate by communicating with the server to learn a central distribution, relying on methods from robust statistics. Then, the learned central distribution is fine-tuned to estimate the individual distributions of users. We show that our method is minimax optimal in our model of heterogeneity and under communication constraints. Further, we provide experimental results using both synthetic data and n-gram frequency estimation in the text domain, which corroborate its efficiency.
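A toy version of the two-stage recipe in item 3 might look like the following: stage one aggregates users' empirical distributions with a coordinate-wise median as a stand-in for a robust-statistics estimator of the central distribution, and stage two lets each user overwrite only the few entries where its own data deviate noticeably. The data model and threshold are illustrative assumptions, not the paper's SHIFT estimator.

```python
# Hedged sketch: robust central estimate, then sparse per-user fine-tuning.
import numpy as np

rng = np.random.default_rng(3)
K, SUPPORT, N = 20, 50, 2000
central = rng.dirichlet(np.ones(SUPPORT))

user_true, user_emp = [], []
for _ in range(K):
    p = central.copy()
    shift_idx = rng.choice(SUPPORT, size=3, replace=False)    # sparse heterogeneity
    p[shift_idx] += rng.uniform(0.15, 0.25, size=3)
    p /= p.sum()
    counts = rng.multinomial(N, p)
    user_true.append(p)
    user_emp.append(counts / N)

# Stage 1: robust central estimate from all users' empirical distributions.
central_hat = np.median(np.stack(user_emp), axis=0)
central_hat /= central_hat.sum()

# Stage 2: per-user fine-tuning on entries with a significant deviation.
tau = 2.0 * np.sqrt(np.log(SUPPORT) / N)
errors = []
for emp, true_p in zip(user_emp, user_true):
    est = central_hat.copy()
    big = np.abs(emp - central_hat) > tau
    est[big] = emp[big]
    est /= est.sum()
    errors.append(0.5 * np.abs(est - true_p).sum())           # total variation distance
print("mean TV error:", float(np.mean(errors)))
```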
  4. Continuous real-time health monitoring in animals is essential for ensuring animal welfare. In ruminants like cows, rumen health is closely intertwined with overall animal health. Therefore, in-situ monitoring of rumen health is critical. However, this demands in-body to out-of-body communication of sensor data. In this paper, we devise a method of channel modeling for a cow using experiments and FEM-based simulations at 400 MHz. This technique can be further employed across all frequencies to characterize the communication channel for the development of a channel architecture that efficiently exploits its properties.
  5. In this paper, we study communication-efficient decentralized training of large-scale machine learning models over a network. We propose and analyze SQuARM-SGD, a decentralized training algorithm employing momentum and compressed communication between nodes, regulated by a locally computable triggering rule. In SQuARM-SGD, each node performs a fixed number of local SGD (stochastic gradient descent) steps using Nesterov's momentum and then sends sparsified and quantized updates to its neighbors only when there is a significant change in its model parameters since the last time communication occurred. We provide convergence guarantees for our algorithm for strongly convex and non-convex smooth objectives. We believe that ours is the first theoretical analysis for compressed decentralized SGD with momentum updates. We show that SQuARM-SGD converges at rate O(1/nT) for strongly convex objectives, while for non-convex objectives it converges at rate O(1/√nT), thus matching the convergence rate of vanilla distributed SGD in both settings. We corroborate our theoretical understanding with experiments and compare the performance of our algorithm with the state of the art, showing that without sacrificing much accuracy, SQuARM-SGD converges at a similar rate while saving significantly in total communicated bits.
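The ingredients in item 5, local momentum SGD steps, top-k sparsification with coarse quantization, and an event-triggered exchange with ring neighbors, can be simulated in a single process as below. This is a hedged sketch of the general recipe, not the paper's algorithm or its analysis; the gossip weights, trigger threshold, compression levels, and toy least-squares problem are all assumptions.

```python
# Hedged sketch: local momentum SGD + event-triggered, compressed gossip on a ring.
import numpy as np

rng = np.random.default_rng(4)
NODES, DIM, LOCAL_STEPS, ROUNDS = 8, 20, 5, 200
A = [rng.normal(size=(100, DIM)) for _ in range(NODES)]
w_true = rng.normal(size=DIM)
b = [a @ w_true + 0.1 * rng.normal(size=100) for a in A]

def compress(v, k=5, levels=16):
    """Keep the k largest-magnitude entries, then quantize them to a coarse grid."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    scale = np.abs(v[idx]).max() + 1e-12
    out[idx] = np.round(v[idx] / scale * levels) / levels * scale
    return out

x = [np.zeros(DIM) for _ in range(NODES)]          # local models
x_hat = [np.zeros(DIM) for _ in range(NODES)]      # copies neighbors currently hold
mom = [np.zeros(DIM) for _ in range(NODES)]
lr, beta, trigger = 0.001, 0.9, 0.05

for _ in range(ROUNDS):
    for i in range(NODES):                         # local momentum SGD steps
        for _ in range(LOCAL_STEPS):
            batch = rng.choice(100, size=10)
            grad = A[i][batch].T @ (A[i][batch] @ x[i] - b[i][batch]) / 10
            mom[i] = beta * mom[i] + grad
            x[i] -= lr * (grad + beta * mom[i])    # Nesterov-style update
    for i in range(NODES):                         # event-triggered compressed update
        if np.linalg.norm(x[i] - x_hat[i]) > trigger:
            x_hat[i] = x_hat[i] + compress(x[i] - x_hat[i])
    for i in range(NODES):                         # gossip toward neighbors' public copies
        left, right = x_hat[(i - 1) % NODES], x_hat[(i + 1) % NODES]
        x[i] += 0.3 * ((left - x_hat[i]) + (right - x_hat[i]))

print("consensus spread:", float(np.std(np.stack(x), axis=0).mean()))
print("error vs w_true:", float(np.mean([np.linalg.norm(xi - w_true) for xi in x])))
```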