Title: AiRobSim: Simulating a Multisensor Aerial Robot for Urban Search and Rescue Operation and Training
Unmanned aerial vehicles (UAVs), equipped with a variety of sensors, are being used to provide actionable information that augments first responders' situational awareness in disaster areas during urban search and rescue (SaR) operations. However, existing aerial robots are unable to sense the occluded spaces in collapsed structures and the voids buried in disaster rubble that may contain victims. In this study, we developed a framework, AiRobSim, to simulate an aerial robot that acquires both aboveground and underground information for post-disaster SaR. The integration of a UAV, ground-penetrating radar (GPR), and other sensors, such as a global navigation satellite system (GNSS) receiver, an inertial measurement unit (IMU), and cameras, enables the aerial robot to provide a holistic view of complex urban disaster areas. The robot-collected data can help locate critical spaces under the rubble to save trapped victims. The simulation framework can serve as a virtual training platform for novice users to control and operate the robot before actual deployment. The data streams provided by the platform, which include maneuver commands, robot states, and environmental information, have the potential to facilitate the understanding of the decision-making process in urban SaR and the training of future intelligent SaR robots.
Award ID(s): 1850008
NSF-PAR ID: 10221533
Author(s) / Creator(s):
Date Published:
Journal Name: Sensors
Volume: 20
Issue: 18
ISSN: 1424-8220
Page Range / eLocation ID: 5223
Format(s): Medium: X
Sponsoring Org: National Science Foundation
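The abstract above describes synchronized data streams (maneuver commands, robot states, GNSS/IMU readings, camera frames, and GPR traces) that the simulator logs for operator training and for learning-based SaR research. The paper does not publish an API, so the following is only a minimal sketch of what one synchronized per-timestep record might look like; all names, rates, and sensor models here are hypothetical stand-ins.

```python
import random
from dataclasses import dataclass, field

@dataclass
class SimRecord:
    """One synchronized log entry; fields mirror the data streams named above."""
    t: float            # simulation time (s)
    command: str        # maneuver command issued by the (trainee) operator
    gnss: tuple         # latitude (deg), longitude (deg), altitude (m)
    imu: tuple          # roll, pitch, yaw (rad)
    gpr_trace: list = field(default_factory=list)  # one GPR A-scan (amplitudes)

def step(t: float) -> SimRecord:
    # Placeholder sensor models: a real simulator would query its physics
    # engine and sensor plugins rather than drawing random noise.
    return SimRecord(
        t=t,
        command="hover",
        gnss=(29.6516 + random.gauss(0, 1e-6), -82.3248 + random.gauss(0, 1e-6), 30.0),
        imu=(random.gauss(0, 0.01), random.gauss(0, 0.01), 1.57),
        gpr_trace=[random.gauss(0, 0.05) for _ in range(256)],
    )

log = [step(0.1 * k) for k in range(100)]  # 10 s of flight at 10 Hz
print(f"logged {len(log)} multisensor records")
```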
More Like this
  1. The Industrial Internet of Things has increased the number of sensors permanently installed in industrial plants. Yet there will be gaps in coverage due to broken sensors or sparse sensor density in very large plants, such as in the petrochemical industry. Modern emergency response operations are beginning to use Small Unmanned Aerial Systems (sUAS) as remote sensors to provide rapid improved situational awareness. Ground-based sensors are an integral component of overall situational awareness platforms, as they can provide the longer-term persistent monitoring that aerial drones cannot. Squishy Robotics and the Berkeley Emergent Space Tensegrities Laboratory have developed hardware and a framework for rapidly deploying sensor robots for integrated ground-aerial disaster response. The semi-autonomous delivery of sensors using tensegrity (tension-integrity) robotics relies on structures that are flexible, lightweight, and have high stiffness-to-weight ratios, making them ideal candidates for robust high-altitude deployments. Squishy Robotics has developed a tensegrity robot for commercial use in Hazardous Materials (HazMat) scenarios that is capable of being deployed from commercial drones or other aircraft. Squishy Robots carrying a delicate sensing and communication payload have been successfully deployed from altitudes of up to 1,000 ft. This paper describes the framework for optimizing the deployment of emergency sensors spatially and over time. AI techniques (e.g., Long Short-Term Memory neural networks) identify regions where sensors would be most valued without requiring humans to enter the potentially dangerous area. The cost function for optimization considers the costs of false-positive and false-negative errors. Decisions on mitigation include shutting down the plant or evacuating the local community. The Expected Value of Information (EVI) is used to identify the most valuable type and location of physical sensors to be deployed to increase the decision-analytic value of a sensor network. A case study using data from the Tennessee Eastman process dataset of a chemical plant, displayed in OSIsoft software, is provided.
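The EVI calculation described above weighs the cost of false negatives (a missed hazardous release) against false positives (an unnecessary shutdown or evacuation). Below is a minimal sketch of ranking candidate sensor drop locations by EVI; the cost figures, error rates, and location names are hypothetical, and the event probabilities would in practice come from the forecasting model (e.g., the LSTM) rather than being hard-coded.

```python
# Hypothetical decision-analytic costs of the two error types named above.
C_FALSE_NEG = 1_000_000.0  # missed release: mitigation happens too late
C_FALSE_POS = 50_000.0     # false alarm: needless plant shutdown

def expected_cost(p_event: float, p_detect: float, p_false_alarm: float) -> float:
    """Expected mitigation cost when acting on a sensor with these error rates."""
    return (p_event * (1.0 - p_detect) * C_FALSE_NEG
            + (1.0 - p_event) * p_false_alarm * C_FALSE_POS)

def evi(p_event: float, p_detect: float, p_false_alarm: float) -> float:
    """Expected Value of Information: cost reduction versus having no sensor.
    With no sensor, every event goes undetected (a guaranteed false negative)."""
    return p_event * C_FALSE_NEG - expected_cost(p_event, p_detect, p_false_alarm)

# Rank hypothetical drop sites: (event prob., detection prob., false-alarm prob.)
candidates = {
    "tank_farm":    (0.02, 0.95, 0.01),
    "reactor_row":  (0.10, 0.80, 0.05),
    "loading_dock": (0.01, 0.99, 0.02),
}
for name in sorted(candidates, key=lambda k: evi(*candidates[k]), reverse=True):
    print(f"{name}: EVI = ${evi(*candidates[name]):,.0f}")
```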
  2. The 2021 Champlain Towers South condominium collapse in Surfside, Florida, resulted in 98 deaths. Nine people are thought to have survived the initial collapse and might have been rescued if rescue workers could have located them. Perhaps, if rescue workers had been able to use robots to search the interior of the rubble pile, outcomes might have been better. An improved understanding of the environment in which a robot would have to operate in order to search the interior of a rubble pile would help roboticists develop better-suited robotic platforms and control strategies. To this end, this work offers an approach to characterize and visualize the interior of a rubble pile and conducts a preliminary analysis of the occurrence of voids. Specifically, the analysis makes opportunistic use of four days of aerial imagery gathered by responders at Surfside to create a 3D volumetric aggregated model of the collapse in order to identify and characterize void spaces in the interior of the rubble. The preliminary results confirm expectations of the small number and scale of these interior voids. The results can inform better selection and control of existing robots for disaster response, aid in determining design specifications (specifically scale and form factor), and improve control of future robotic platforms developed for search operations in rubble.
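The abstract does not detail the void-identification step, but one common way to find enclosed voids in a 3D volumetric model is to voxelize it and extract connected components of empty space that do not touch the model boundary. The sketch below illustrates that idea on a synthetic occupancy grid; the voxel size, grid dimensions, and densities are made-up stand-ins, not values from the Surfside model.

```python
import numpy as np
from scipy import ndimage

VOXEL = 0.25  # hypothetical voxel edge length (m)

# Stand-in occupancy grid aggregated from photogrammetry: 1 = debris, 0 = empty.
rng = np.random.default_rng(0)
occupancy = (rng.random((80, 80, 40)) < 0.75).astype(np.uint8)

# Label connected pockets of empty voxels in 3D.
labels, n_components = ndimage.label(occupancy == 0)

# A pocket that touches the grid boundary is open air, not an interior void.
boundary = set(np.unique(np.concatenate([
    labels[0].ravel(), labels[-1].ravel(),
    labels[:, 0].ravel(), labels[:, -1].ravel(),
    labels[:, :, 0].ravel(), labels[:, :, -1].ravel()])).tolist())
interior = [lbl for lbl in range(1, n_components + 1) if lbl not in boundary]

if interior:
    volumes = ndimage.sum(labels > 0, labels, interior) * VOXEL**3
    print(f"{len(interior)} interior voids; largest = {volumes.max():.2f} m^3")
else:
    print("no fully enclosed voids found")
```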
  3. Flood events have become more intense and more frequent due to heavy rainfall and hurricanes caused by global warming. Accurate floodwater extent maps are essential information sources for emergency management agencies and flood relief programs to direct their resources to the most affected areas. Synthetic Aperture Radar (SAR) data are superior to optical data for floodwater mapping, especially in vegetated areas and in forests that are adjacent to urban areas and critical infrastructure. Investigating floodwater mapping with various available SAR sensors and comparing their performance allows the identification of suitable SAR sensors for mapping inundated areas in different land covers, such as forests and vegetated areas. In this study, we investigated the performance of polarization configurations for flood boundary delineation in vegetated and open areas, derived from Sentinel-1B C-band and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band data collected during flood events resulting from Hurricane Florence in eastern North Carolina. The datasets from the two sensors, collected on the same day over the same study area, were processed and classified into five land-cover classes using a machine learning method, the Random Forest classification algorithm. We compared the classification results of linear, dual, and full polarizations of the SAR datasets. The fully polarized L-band data achieved the highest flood-mapping accuracy, as the decomposition of fully polarized SAR data allows land-cover features to be identified based on their scattering mechanisms.
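As a concrete illustration of the classification step, the sketch below trains a Random Forest on a per-pixel table of backscatter features, one column per polarization channel (a quad-pol stack such as HH/HV/VH/VV for the L-band case). The feature values and labels are randomly generated stand-ins; real inputs would be calibrated backscatter (and, for the full-pol case, decomposition components) sampled at labeled training polygons.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

CLASSES = ["open water", "flooded vegetation", "dry vegetation", "urban", "bare soil"]

# Stand-in per-pixel features: backscatter (dB) in HH, HV, VH, VV channels.
rng = np.random.default_rng(42)
n_pixels = 5000
X = rng.normal(loc=[-8.0, -15.0, -15.0, -10.0], scale=3.0, size=(n_pixels, 4))
y = rng.integers(0, len(CLASSES), size=n_pixels)  # placeholder for polygon labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=CLASSES))
```

With random stand-in labels the reported accuracy is of course meaningless; the point is only the shape of the pipeline (feature stack in, per-pixel class out), which is the same for dual-pol and full-pol inputs.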
  4. In this paper, we derive a new capability for robots to measure relative direction, or Angle-of-Arrival (AOA), to other robots while operating in non-line-of-sight and unmapped environments, without requiring external infrastructure. We do so by capturing all of the paths that a WiFi signal traverses as it travels from a transmitting to a receiving robot in the team, which we term an AOA profile. The key intuition behind our approach is to emulate antenna arrays in the air as a robot moves freely in 2D or 3D space. The small differences in the phase and amplitude of WiFi signals are thus processed with knowledge of a robot's local displacements (often provided via inertial sensors) to obtain the profile, via a method akin to Synthetic Aperture Radar (SAR). The main contribution of this work is the development of i) a framework to accommodate arbitrary 2D and 3D trajectories, as well as continuous mobility of both transmitting and receiving robots, while computing AOA profiles between them, and ii) an accompanying analysis that provides a lower bound on the variance of AOA estimation as a function of robot trajectory geometry, based on the Cramér-Rao bound and antenna array theory. This is a critical distinction from previous work on SAR, which restricts robot mobility to prescribed motion patterns, does not generalize to the full 3D space, and/or requires transmitting robots to be static during data acquisition periods. In fact, we find that allowing robots to use their full mobility in 3D space while performing SAR results in more accurate AOA profiles and thus better AOA estimation. We formally characterize this observation as the informativeness of the trajectory, a computable quantity for which we derive a closed form. All theoretical developments are substantiated by extensive simulation and hardware experiments on air/ground robot platforms. Our experimental results bolster our theoretical findings, demonstrating that 3D trajectories provide enhanced and consistent accuracy, with AOA errors of less than 10 degrees for 95% of trials. We also show that our formulation can be used with an off-the-shelf trajectory estimation sensor (Intel RealSense T265 tracking camera) for estimating the robots' local displacements, and we provide theoretical as well as empirical results that show the impact of typical trajectory estimation errors on the measured AOA. Finally, we demonstrate the performance of our system on a multi-robot task in which a heterogeneous air/ground pair of robots continuously measures AOA profiles over a WiFi link to achieve dynamic rendezvous in an unmapped, 300-square-meter environment with occlusions.
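To make the "antenna array in the air" idea concrete, the sketch below computes a 2D AOA profile by phase-compensating channel measurements against each candidate direction and summing them, i.e., Bartlett-style beamforming over the positions visited along the trajectory. It is only an illustration of the principle under a single-path, known-displacement assumption; the paper's full formulation handles 3D trajectories, moving transmitters, and the Cramér-Rao analysis.

```python
import numpy as np

LAM = 3e8 / 5.18e9  # wavelength (m) of a hypothetical 5.18 GHz WiFi channel

def aoa_profile(h, positions, n_angles=360):
    """Steered power toward each candidate direction (Bartlett beamforming).

    h:         complex channel measurements along the trajectory, shape (K,)
    positions: receiver displacements when each sample was taken, shape (K, 2)
    Peaks in the returned profile indicate likely arrival directions.
    """
    angles = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    u = np.stack([np.cos(angles), np.sin(angles)], axis=1)     # unit vectors (A, 2)
    steering = np.exp(1j * 2 * np.pi / LAM * positions @ u.T)  # (K, A)
    return angles, np.abs(h.conj() @ steering) ** 2 / len(h)

# Synthetic check: one path arriving from 40 degrees, sampled at 64 random
# positions spanning a few wavelengths (the "emulated" aperture).
rng = np.random.default_rng(1)
pos = rng.uniform(-0.15, 0.15, size=(64, 2))
true_dir = np.array([np.cos(np.deg2rad(40.0)), np.sin(np.deg2rad(40.0))])
h = np.exp(1j * 2 * np.pi / LAM * pos @ true_dir)
angles, profile = aoa_profile(h, pos)
print(f"estimated AOA: {np.rad2deg(angles[np.argmax(profile)]):.1f} deg")
```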
  5. Ishigami G., Yoshida K. (Eds.)
    This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments. Robotic tele-operation in remote environments is difficult due to the lack of sufficient situational awareness, mostly caused by the stationary, limited field of view and the lack of depth perception of the robot's onboard camera. The emerging state of the practice is to use two robots, a primary and a secondary that acts as a visual assistant, to overcome the perceptual limitations of the onboard sensors by providing an external viewpoint. However, problems exist when using a tele-operated visual assistant: extra manpower, manually chosen suboptimal viewpoints, and extra teamwork demand between the primary and secondary operators. In this work, we use an autonomous tethered aerial visual assistant to replace the secondary robot and operator, reducing the human-robot ratio from 2:2 to 1:2. This visual assistant is able to autonomously navigate through unstructured or confined spaces in a risk-aware manner while continuously maintaining good viewpoint quality to increase the primary operator's situational awareness. With the proposed co-robot team, tele-operation missions in nuclear operations, bomb squads, disaster response, and other domains with novel tasks or highly occluded environments could benefit from reduced manpower and teamwork demand, along with improved visual assistance quality based on trustworthy risk-aware motion in cluttered environments.
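The risk-aware viewpoint trade-off described above can be illustrated with a toy scoring rule: discount each candidate viewpoint's quality by the probability of safely reaching and holding it. This is only a hypothetical single-viewpoint sketch; the paper's planner reasons over whole tethered flight paths, and the names, scores, and weighting below are invented.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    name: str
    quality: float  # viewpoint quality for the primary operator, in [0, 1]
    risk: float     # probability the assistant is damaged reaching/holding it

RISK_WEIGHT = 2.0  # assumed aversion: heavier penalty on risky viewpoints

def score(v: Viewpoint) -> float:
    # Quality discounted by (survival probability) ** RISK_WEIGHT.
    return v.quality * (1.0 - v.risk) ** RISK_WEIGHT

candidates = [
    Viewpoint("behind manipulator", quality=0.9, risk=0.40),
    Viewpoint("high oblique",       quality=0.7, risk=0.10),
    Viewpoint("side corridor",      quality=0.5, risk=0.02),
]
best = max(candidates, key=score)
print(f"selected '{best.name}' (score {score(best):.2f})")
```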