A Comparison of Point Cloud Registration Techniques for On-Site Disaster Data from the Surfside Structural Collapse

3D representations of geographical surfaces in the form of dense point clouds can be a valuable tool for documenting and reconstructing a structural collapse, such as the 2021 Champlain Towers condominium collapse in Surfside, Florida. Point cloud data reconstructed from aerial footage taken by uncrewed aerial systems at frequent intervals over a dynamic search-and-rescue scene poses significant challenges. Properly aligning, or registering, large point clouds in this context is especially difficult because they capture multiple regions whose geometries change over time. These regions contain dynamic features such as excavation machinery, cones marking boundaries, and the structural collapse rubble itself. In this paper, we study the performance of commonly used point cloud registration methods on the dynamic scenes present in the raw data. We evaluate Iterative Closest Point (ICP), rigid Coherent Point Drift (CPD), and PointNetLK for registering dense point clouds reconstructed sequentially over a time frame of five days. All methods are compared by registration error, execution time, and robustness, and we conclude with an analysis and a judgement of the best-suited method for the specific data at hand.
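For readers unfamiliar with the baseline method evaluated above, the following is a minimal sketch of classic point-to-point ICP using only NumPy: it alternates nearest-neighbour correspondence with a closed-form least-squares rigid fit (the Kabsch algorithm). This is an illustrative toy implementation, not the code used in the paper; for large clouds one would use a spatial index (k-d tree) rather than the brute-force distance matrix shown here.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=50, tol=1e-8):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch fits."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small toy clouds)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(cur)), nn].mean()
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    # recover the single rigid transform taking the original src to its final pose
    R_tot, t_tot = best_rigid_transform(src, cur)
    return R_tot, t_tot, err
```

The dynamic regions discussed in the abstract are precisely where this scheme struggles: nearest-neighbour matching assumes the same geometry exists in both clouds, so moved machinery or shifted rubble produces spurious correspondences.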
Recent reinforcement learning (RL) approaches have shown strong performance in complex domains such as Atari games, but are often highly sample inefficient. A common approach to reduce interaction time with the environment is to use reward shaping, which involves carefully designing reward functions that provide the agent intermediate rewards for progress towards the goal. However, designing appropriate shaping rewards is known to be difficult as well as time-consuming. In this work, we address this problem by using natural language instructions to perform reward shaping. We propose the LanguagE-Action Reward Network (LEARN), a framework that maps free-form natural language instructions to intermediate rewards based on actions taken by the agent. These intermediate language-based rewards can be seamlessly integrated into any standard reinforcement learning algorithm. We experiment with Montezuma's Revenge from the Arcade Learning Environment, a popular benchmark in RL. Our experiments on a diverse set of 15 tasks demonstrate that, for the same number of interactions with the environment, language-based rewards lead to successful completion of the task 60% more often on average, compared to learning without language.
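The shaping scheme described above can be sketched in a few lines. This is a hypothetical stand-in, not the LEARN model itself: `language_reward` substitutes a hard-coded lookup for the learned network that scores how well an action matches an instruction, and the instruction text and weight `lam` are illustrative assumptions.

```python
# Hypothetical sketch of language-based reward shaping in the spirit of LEARN.
# The learned instruction/action relevance model is replaced by a toy lookup.

INSTRUCTION = "climb down the ladder"  # example free-form instruction (assumption)

def language_reward(instruction, action):
    """Stand-in for a learned model scoring action relevance to an instruction."""
    related = {"climb down the ladder": {"DOWN"}}
    return 1.0 if action in related.get(instruction, set()) else 0.0

def shaped_reward(env_reward, instruction, action, lam=0.1):
    """Total reward = extrinsic environment reward + weighted language bonus."""
    return env_reward + lam * language_reward(instruction, action)
```

Because the shaping term depends only on the instruction and the action, it can be added to the reward signal of any standard RL algorithm without modifying the environment, which is the "seamless integration" the abstract refers to.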