Title: Resilient Collision-tolerant Navigation in Confined Environments
This work presents the design and autonomous navigation policy of the Resilient Micro Flyer, a new type of collision-tolerant robot tailored to fly through extremely confined environments and manhole-sized tubes. The robot maintains a low weight (<500 g) and implements a combined rigid-compliant design through the integration of elastic flaps around its stiff collision-tolerant frame. These passive flaps ensure compliant collisions, contact sensing, and smooth navigation in contact with the environment. Focusing on resilient autonomy capable of running on resource-constrained hardware, we demonstrate the beneficial role of compliant collisions for the reliability of the onboard visual-inertial odometry and propose a safe navigation policy that exploits both collision avoidance using lightweight time-of-flight sensing and adaptive control in response to collisions. The robot further realizes an explicit manhole navigation mode that exploits the direct mechanical feedback provided by the flaps and a special navigation strategy to self-align inside manholes with non-straight geometry. Comprehensive experimental studies are presented to evaluate, both individually and as a whole, how resilience is achieved through the robot design and its navigation scheme.
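As a rough illustration of the navigation policy described above (a minimal sketch, not the authors' implementation), the following Python snippet combines a repulsive avoidance term driven by time-of-flight ranges with an adaptive response that yields along the contact normal once the flaps register a collision. All names, thresholds, and the retreat behavior are assumptions made for illustration.

    import numpy as np

    TOF_AVOID_DIST = 0.4   # m; steer away below this range (assumed value)
    TOF_STOP_DIST = 0.15   # m; stop forward motion below this range (assumed)

    def velocity_command(tof_ranges, tof_bearings, collision_normal=None,
                         cruise_speed=0.5):
        """Return a planar (vx, vy) body-frame velocity command.

        tof_ranges / tof_bearings: readings from lightweight time-of-flight
        sensors (range in m, bearing in rad). collision_normal: unit vector
        of a sensed contact (e.g. from the compliant flaps), or None in
        free flight.
        """
        if collision_normal is not None:
            # Adaptive response to a collision: back off along the contact
            # normal at reduced speed instead of fighting the obstacle.
            return -0.3 * cruise_speed * collision_normal

        cmd = np.array([cruise_speed, 0.0])  # nominal forward flight
        for r, b in zip(tof_ranges, tof_bearings):
            if r < TOF_STOP_DIST:
                return np.zeros(2)           # too close: hold position
            if r < TOF_AVOID_DIST:
                # Repulsive term pointing away from the close return,
                # weighted by proximity.
                away = -np.array([np.cos(b), np.sin(b)])
                cmd = cmd + cruise_speed * (1.0 - r / TOF_AVOID_DIST) * away
        return cmd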
Award ID(s): 2008904
PAR ID: 10296723
Journal Name: 2021 IEEE International Conference on Robotics and Automation
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Navigation and motion control of a robot to a destination have historically been performed under the assumption that contact with the environment is harmful. This makes sense for rigid-bodied robots, where obstacle collisions are fundamentally dangerous. However, because many soft robots have low-inertia, compliant bodies, obstacle contact is inherently safe. As a result, constraining the robot's paths to avoid interaction with the environment is unnecessary and may be limiting. In this article, we mathematically formalize the interactions of a soft growing robot with a planar environment in an empirical kinematic model. Using this interaction model, we develop a method to plan paths for the robot to a destination. Rather than avoiding contact with the environment, the planner exploits obstacle contact when it is beneficial for navigation. We find that a planner that takes into account and capitalizes on environmental contact produces paths that are more robust to uncertainty than a planner that avoids all obstacle contact.
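The contact-exploiting idea in this abstract lends itself to a compact illustration. The toy model below is a sketch under strong assumptions, not the paper's empirical model: it grows a tip along a heading and, on hitting a flat obstacle, deflects the heading onto the obstacle's tangent so the wall steers the robot rather than blocking it. Walls are treated as infinite lines purely to keep the example short.

    import numpy as np

    def grow(tip, heading, step, walls):
        """Advance the tip by one growth step, deflecting along any wall hit.

        walls: list of (point, unit_tangent) pairs describing flat obstacles
        (treated as infinite lines for simplicity). Returns the new tip
        position and the possibly deflected heading.
        """
        proposal = tip + step * heading
        for point, tangent in walls:
            normal = np.array([-tangent[1], tangent[0]])
            # Did this step cross from the wall's front side to its far side?
            if np.dot(tip - point, normal) > 0 >= np.dot(proposal - point, normal):
                # Contact: slide along the wall in the direction of travel.
                # (A perfectly head-on hit stalls in this toy model.)
                heading = tangent * np.sign(np.dot(heading, tangent))
                proposal = tip + step * heading
        return proposal, heading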
  2. For many types of robots, avoiding obstacles is necessary to prevent damage to the robot and its environment. As a result, obstacle avoidance has historically been an important problem in robot path planning and control. Soft robots represent a paradigm shift with respect to obstacle avoidance because their low mass and compliant bodies can make collisions with obstacles inherently safe. Here we consider the benefits of intentional obstacle collisions for soft robot navigation. We develop and experimentally verify a model of robot-obstacle interaction for a tip-extending soft robot. Building on the obstacle interaction model, we develop an algorithm to determine the path of a growing robot that takes obstacle collisions into account. We find that obstacle collisions can be beneficial for open-loop navigation of growing robots because the obstacles passively steer the robot, both reducing the uncertainty of the robot's location and directing the robot to targets that do not lie on a straight path from the starting point. Our work shows that for a robot with predictable and safe interactions with obstacles, target locations in a cluttered, mapped environment can be reached reliably by simply setting the initial trajectory. This has implications for the control and design of robots with minimal active steering.
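Because contact steers the robot predictably, open-loop navigation as described in this abstract reduces to choosing the initial trajectory. A minimal sketch of that search (illustrative only; it reuses the toy grow step sketched under item 1 above) rolls each candidate heading forward through the contact model and keeps the one whose final tip lands closest to the target.

    import numpy as np

    def pick_initial_heading(start, target, walls, n_headings=72,
                             step=0.05, n_steps=200):
        """Search over initial headings only; obstacle contacts do the steering."""
        best_h0, best_dist = None, np.inf
        for angle in np.linspace(0.0, 2 * np.pi, n_headings, endpoint=False):
            h0 = np.array([np.cos(angle), np.sin(angle)])
            tip, heading = np.asarray(start, dtype=float), h0
            for _ in range(n_steps):
                tip, heading = grow(tip, heading, step, walls)  # toy model above
            dist = np.linalg.norm(tip - np.asarray(target, dtype=float))
            if dist < best_dist:
                best_h0, best_dist = h0, dist
        return best_h0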
  3. We propose VLM-Social-Nav, a novel Vision-Language Model (VLM)-based navigation approach to compute a robot's motion in human-centered environments. Our goal is to make real-time decisions on robot actions that are socially compliant with human expectations. We utilize a perception model to detect important social entities and prompt a VLM to generate guidance for socially compliant robot behavior. VLM-Social-Nav uses a VLM-based scoring module that computes a cost term ensuring that the actions generated by the underlying planner are socially appropriate and effective. Our overall approach reduces reliance on large training datasets and enhances adaptability in decision-making; in practice, it results in improved socially compliant navigation in human-shared environments. We demonstrate and evaluate our system in four different real-world social navigation scenarios with a Turtlebot robot, observing at least a 27.38% improvement in the average success rate and a 19.05% improvement in the average collision rate across the four scenarios. Our user study scores show that VLM-Social-Nav generates the most socially compliant navigation behavior.
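One way to picture the scoring module in this abstract is as a single extra cost term weighed against the usual planner costs. The sketch below is only a schematic of that idea: query_vlm is a hypothetical stand-in for whatever VLM interface is available, and the candidate-action interface, prompt, and weight are assumptions, not details from the paper.

    def social_cost(scene_image, action_description, query_vlm):
        """Ask the VLM to rate an action's social compliance in [0, 1] and
        return it as a cost (lower is better). query_vlm is hypothetical."""
        prompt = ("On a scale from 0 to 1, how socially appropriate is this "
                  f"robot action given the scene: {action_description}? "
                  "Answer with a single number.")
        score = float(query_vlm(image=scene_image, prompt=prompt))
        return 1.0 - score

    def pick_action(candidates, scene_image, query_vlm,
                    goal_cost, obstacle_cost, w_social=0.5):
        # Combine classical planner costs with the VLM-derived social term;
        # each candidate is assumed to carry a textual `description`.
        def total_cost(action):
            return (goal_cost(action) + obstacle_cost(action)
                    + w_social * social_cost(scene_image, action.description,
                                             query_vlm))
        return min(candidates, key=total_cost)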
  4. Humans are adept at navigating public spaces shared with others, where current autonomous mobile robots still struggle: while safely and efficiently reaching their goals, humans communicate their intentions and conform to unwritten social norms on a daily basis; conversely, robots become clumsy in those daily social scenarios, getting stuck in dense crowds, surprising nearby pedestrians, or even causing collisions. While recent research on robot learning has shown promise in data-driven social robot navigation, good-quality training data is still difficult to acquire through either trial and error or expert demonstrations. In this work, we propose to utilize the body of rich, widely available social human navigation data in many natural human-inhabited public spaces for robots to learn similar, human-like, socially compliant navigation behaviors. Specifically, we design an open-source egocentric data collection sensor suite wearable by walking humans to provide multimodal robot perception data; we collect a large-scale (~100 km, 20 hours, 300 trials, 13 humans) dataset in a variety of public spaces which contain numerous natural social navigation interactions; and we analyze our dataset, demonstrate its usability, and point out future research directions and use cases. Website: https://cs.gmu.edu/~xiao/Research/MuSoHu/
  5. To obtain more consistent measurements over the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint the agronomists impose is that the robot must not run over the wheat while driving. Accordingly, we trained a spatial deep learning model that helps the robot navigate autonomously in the field while avoiding collisions with the wheat. To train this model, we used publicly available databases of prelabeled wheat images, along with wheat images that we collected in the field. We used the MobileNet single shot detector (SSD) as our deep learning model to detect wheat in the field. To increase the frame rate for real-time robot response to field environments, we trained MobileNet SSD on the wheat images and used a new stereo camera, the Luxonis Depth AI camera. Together, the newly trained model and camera achieve a frame rate of 18–23 frames per second (fps), fast enough for the robot to process its surroundings once every 2–3 inches of driving. Once we confirmed that the robot accurately detects its surroundings, we addressed its autonomous navigation. The new stereo camera allows the robot to determine its distance from the trained objects. In this work, we also developed a navigation and collision avoidance algorithm that utilizes this distance information to help the robot see its surroundings and maneuver in the field, precisely avoiding collisions with the wheat crop. Extensive experiments were conducted to evaluate the performance of our proposed method. We also compared the quantitative results obtained by our proposed MobileNet SSD model with those of other state-of-the-art object detection models, such as YOLO v5 and the Faster region-based convolutional neural network (R-CNN). The detailed comparative analysis reveals the effectiveness of our method in terms of both model precision and inference speed.
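The distance-based maneuvering described in this abstract can be pictured with a small reactive rule: slow down as the nearest wheat detection gets closer and steer away from the side of the image it occupies. The following is an illustrative sketch under assumed thresholds and an assumed detection format, not the paper's algorithm.

    def steer_command(detections, image_width, stop_dist=0.3, avoid_dist=0.8,
                      cruise_speed=0.4, turn_gain=0.6):
        """detections: list of (x_center_px, depth_m) for detected wheat,
        e.g. from an SSD detector plus stereo depth. Returns
        (forward_speed_m_s, turn_rate_rad_s); positive turn rate = left.
        All thresholds and gains are assumed values."""
        if not detections:
            return cruise_speed, 0.0
        x_px, depth = min(detections, key=lambda d: d[1])  # nearest detection
        if depth < stop_dist:
            return 0.0, 0.0  # too close to the crop: stop
        if depth < avoid_dist:
            # Normalized horizontal offset in [-1, 1]; > 0 means the wheat
            # is to the right, so a positive (leftward) turn steers away.
            offset = (x_px - image_width / 2) / (image_width / 2)
            urgency = 1.0 - depth / avoid_dist
            return (cruise_speed * (depth / avoid_dist),
                    turn_gain * offset * urgency)
        return cruise_speed, 0.0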