Title: STRUCTURAL HEALTH MONITORING WITH ROBOT AND AUGMENTED REALITY TEAMS
Mobile robots can access regions and collect data in structural locations not easily reached by humans. These include confined spaces, such as inside walls and underground pipes, and remote spaces, such as the underside of bridge decks. Robot access provides the opportunity to sense in these difficult-to-access spaces both with robot-mounted sensors, e.g., cameras and lidars, and by having the robot place and service standalone sensors. Teams of robots, sensors, and AR-equipped humans have the potential to provide rapid and more comprehensive structural assessments. This paper presents results of studies using small robots to explore and collect structural condition data from remote and confined spaces, including in-wall cavities, culverts, and bridge deck undersides. The paper also covers system and network architecture, methods for automating data processing with localized and edge-based processors, and the use of augmented reality (AR) human interfaces, and discusses key technical challenges and possible solutions.
Award ID(s):
2119485
PAR ID:
10545573
Author(s) / Creator(s):
Publisher / Repository:
Destech Publications, Inc.
Date Published:
ISBN:
9781605956930
Subject(s) / Keyword(s):
robots, structural health monitoring, augmented reality
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Due to their ability to move without sliding relative to their environment, soft growing robots are attractive for deploying distributed sensor networks in confined spaces. Sensing of the state of such robots would add to their capabilities as human-safe, adaptable manipulators. However, incorporation of distributed sensors onto soft growing robots is challenging because it requires an interface between stiff and soft materials, and the sensor network needs to undergo significant strain. In this work, we present a method for adding sensors to soft growing robots that uses flexible printed circuit boards with self-contained units of microcontrollers and sensors encased in a laminate armor that protects them from unsafe curvatures. We demonstrate the ability of this system to relay directional temperature and humidity information in hard-to-access spaces. We also demonstrate and characterize a method for sensing the growing robot shape using inertial measurement units deployed along its length, and develop a mathematical model to predict its accuracy. This work advances the capabilities of soft growing robots, as well as the field of soft robot sensing. 
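The abstract above describes sensing the growing robot's shape with inertial measurement units deployed along its length. As a minimal, hypothetical sketch (not the paper's model): if each IMU is assumed to report the local tangent direction of its segment as pitch and yaw angles, and the backbone is treated as straight between IMUs, the 3D shape can be recovered by integrating unit tangent vectors over the known segment spacing. The function name and interface below are illustrative only.

```python
import math

def reconstruct_shape(pitch, yaw, segment_len):
    """Estimate 3D backbone points from per-IMU pitch/yaw angles (radians).

    Assumes each IMU reports the tangent direction of its segment and that
    the robot body is straight between consecutive IMUs (an illustrative
    simplification, not the paper's model).
    """
    x = y = z = 0.0
    points = [(x, y, z)]  # base of the robot at the origin
    for p, w in zip(pitch, yaw):
        # Unit tangent vector from pitch (elevation) and yaw (heading)
        dx = math.cos(p) * math.cos(w)
        dy = math.cos(p) * math.sin(w)
        dz = math.sin(p)
        # Step one segment length along the tangent
        x += segment_len * dx
        y += segment_len * dy
        z += segment_len * dz
        points.append((x, y, z))
    return points
```

With four level, straight segments of 0.25 m each, the estimated tip lands 1 m from the base along the heading axis; accuracy of such a piecewise reconstruction degrades as curvature between IMUs grows, which is the kind of error the paper's predictive model would quantify.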
  2. Humans are adept at navigating public spaces shared with others, where current autonomous mobile robots still struggle: while safely and efficiently reaching their goals, humans communicate their intentions and conform to unwritten social norms on a daily basis; conversely, robots become clumsy in those daily social scenarios, getting stuck in dense crowds, surprising nearby pedestrians, or even causing collisions. While recent research on robot learning has shown promise in data-driven social robot navigation, good-quality training data is still difficult to acquire through either trial and error or expert demonstrations. In this work, we propose to utilize the body of rich, widely available social human navigation data in many natural human-inhabited public spaces for robots to learn similar, human-like, socially compliant navigation behaviors. Specifically, we design an open-source egocentric data collection sensor suite wearable by walking humans to provide multimodal robot perception data; we collect a large-scale (~100 km, 20 hours, 300 trials, 13 humans) dataset in a variety of public spaces containing numerous natural social navigation interactions; and we analyze the dataset, demonstrate its usability, and point out future research directions and use cases. Website: https://cs.gmu.edu/~xiao/Research/MuSoHu/
  3. Ishigami G., Yoshida K. (Ed.)
    This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments. Robotic tele-operation in remote environments is difficult due to the lack of sufficient situational awareness, mostly caused by the stationary, limited field of view and lack of depth perception of the robot's onboard camera. The emerging state of the practice is to use two robots: a primary, and a secondary that acts as a visual assistant, overcoming the perceptual limitations of the onboard sensors by providing an external viewpoint. However, problems exist when using a tele-operated visual assistant: extra manpower, manually chosen suboptimal viewpoints, and extra teamwork demand between the primary and secondary operators. In this work, we use an autonomous tethered aerial visual assistant to replace the secondary robot and operator, reducing the human-robot ratio from 2:2 to 1:2. This visual assistant is able to autonomously navigate through unstructured or confined spaces in a risk-aware manner, while continuously maintaining good viewpoint quality to increase the primary operator's situational awareness. With the proposed co-robot team, tele-operation missions in nuclear operations, bomb squad response, disaster robotics, and other domains with novel tasks or highly occluded environments could benefit from reduced manpower and teamwork demand, along with improved visual assistance quality based on trustworthy risk-aware motion in cluttered environments.
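The abstract above couples risk-aware navigation with maintaining viewpoint quality; the paper's actual formulation is not reproduced here. A toy sketch of the underlying trade-off, under the assumption that each candidate viewpoint carries a scalar view-quality score and a scalar traversal-risk score (both hypothetical quantities for illustration), is a simple weighted objective:

```python
def select_viewpoint(candidates, w_risk=0.5):
    """Pick the candidate viewpoint with the best trade-off between
    view quality and traversal risk (higher score is better).

    Each candidate is a dict with 'quality' and 'risk' values in [0, 1];
    w_risk sets how strongly risk penalizes an otherwise good viewpoint.
    This is an illustrative objective, not the paper's method.
    """
    return max(candidates, key=lambda c: c["quality"] - w_risk * c["risk"])
```

A heavier risk weight makes the assistant prefer safer, slightly worse viewpoints, which is the qualitative behavior the abstract describes as "trustworthy risk-aware motion."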
  4.
    Pneumatically operated soft growing robots that extend via tip eversion are well-suited for navigation in confined spaces. Adding the ability to interact with the environment using sensors and tools attached to the robot tip would greatly enhance the usefulness of these robots for exploration in the field. However, because the material at the tip of the robot body continually changes as the robot grows and retracts, it is challenging to keep sensors and tools attached to the robot tip during actuation and environment interaction. In this paper, we analyze previous designs for mounting to the tip of soft growing robots, and we present a novel device that successfully remains attached to the robot tip while providing a mounting point for sensors and tools. Our tip mount incorporates and builds on our previous work on a device to retract the robot without undesired buckling of its body. Using our tip mount, we demonstrate two new soft growing robot capabilities: (1) pulling on the environment while retracting, and (2) retrieving and delivering objects. Finally, we discuss the limitations of our design and opportunities for improvement in future soft growing robot tip mounts. 
  5. The rapidly increasing capabilities of autonomous mobile robots promise to make them ubiquitous in the coming decade. These robots will continue to enhance efficiency and safety in novel applications such as disaster management, environmental monitoring, bridge inspection, and agricultural inspection. To operate autonomously without constant human intervention, even in remote or hazardous areas, robots must sense, process, and interpret environmental data using only onboard sensing and computation. This capability is made possible by advancements in perception algorithms, allowing these robots to rely primarily on their perception capabilities for navigation tasks. However, tiny robot autonomy is hindered mainly by limits on sensors, memory, and computing imposed by size, area, weight, and power constraints; the bottleneck lies in achieving real-time perception under these resource limits. To enable autonomy in robots less than 100 mm in body length, we draw inspiration from tiny organisms such as insects and hummingbirds, known for their sophisticated perception, navigation, and survival abilities despite their minimal sensing and neural systems. This work aims to provide insights into designing a compact and efficient minimal perception framework for tiny autonomous robots, from higher cognitive levels down to the sensor level.