The problem of sound source localization has attracted the interest of researchers from disciplines ranging from biology to robotics and navigation. It is, in essence, an estimation problem: inferring the location of a sound source from the information available at the sound receivers. It is common practice to design Bayesian estimators based on a dynamic model of the system. In some practical situations, however, such as a moving sound source, a dynamic model may not be available; instead, some a priori information about the sound source may be known. This paper considers a case study of designing an estimator that uses available a priori information, along with measurement signals received from a bearing-only sensor, to track a moving sound source in two dimensions.
Bearing-Only Localization of a Quasi-Static Sound Source With a Binaural Microphone Array
Abstract Sound source localization is the ability to determine the bearing and distance of a sound in space. The challenge of sound source localization has been a major area of research for engineers, especially those studying robotics, for decades. One of the main topics of focus is the ability of robots to track objects, human voices, or other robots robustly and accurately. Common approaches rely on large microphone arrays, computationally intensive machine learning methods, or known dynamic models of a system, which may not always be available. We seek to simplify this problem using a minimal amount of inexpensive equipment alongside a Bayesian estimator capable of localizing an emitter using easily available a priori information and timing data received from a prototype binaural sensor. We perform an experiment in a full anechoic chamber with a sound source moving at a constant speed; this experimental environment allows us to isolate the performance of the sensor. We find that, while our current system is not perfect, it is able to track the general motion of a sound source, and the path to even more accurate tracking in the future is clear.
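The abstract describes extracting bearing from timing data on a binaural (two-microphone) sensor. A minimal sketch of the standard approach, assuming a far-field source and a cross-correlation-based interaural time difference (ITD) estimate; the function name, sign convention, and parameters are illustrative, not the paper's actual implementation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def itd_bearing(left, right, fs, mic_spacing, c=SPEED_OF_SOUND):
    """Estimate the bearing of a far-field source from the interaural
    time difference (ITD) between two microphone channels.

    Returns the bearing in radians: 0 is broadside (straight ahead),
    positive angles are toward the `right` microphone.
    """
    n = len(left)
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (n - 1)   # samples by which `left` lags `right`
    tau = lag / fs                    # ITD in seconds
    # Far-field geometry: tau = (d / c) * sin(theta)
    s = np.clip(tau * c / mic_spacing, -1.0, 1.0)
    return float(np.arcsin(s))
```

In a Bayesian tracker of the kind the abstract describes, an estimate like this would serve as the bearing measurement fed to the estimator at each time step.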
- Award ID(s):
- 1751498
- PAR ID:
- 10215497
- Date Published:
- Journal Name:
- ASME Dynamic Systems and Control Conference
- Page Range / eLocation ID:
- DSCC2020-3235
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
-
This paper presents the WiFi-Sensor-for-Robotics (WSR) toolbox, an open-source C++ framework. It enables robots in a team to obtain relative bearing to each other, even in non-line-of-sight (NLOS) settings, which is a very challenging problem in robotics. It does so by analyzing the phase of their communicated WiFi signals as the robots traverse the environment. This capability, based on the theory developed in our prior works, is made available for the first time as an open-source tool. It is motivated by the lack of easily deployable solutions that use robots' local resources (e.g., WiFi) for sensing in NLOS. This has implications for localization, ad-hoc robot networks, and security in multi-robot teams, amongst others. The toolbox is designed for distributed and online deployment on robot platforms using commodity hardware and on-board sensors. We also release datasets demonstrating its performance in NLOS and line-of-sight (LOS) settings for a multi-robot localization use case. Empirical results show that the bearing estimation from our toolbox achieves a mean accuracy of 5.10 degrees. This leads to median errors of 0.5 m and 0.9 m for localization in LOS and NLOS settings respectively, in a hardware deployment in an indoor office environment.
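The WSR toolbox recovers bearing from WiFi signal phase along the robot's trajectory. As a much-simplified illustration of the underlying geometry only (not the toolbox's synthetic-aperture method), the far-field relation between a phase difference at two receive points and source bearing can be sketched as follows; the function name and parameters are hypothetical:

```python
import math

def phase_diff_bearing(delta_phi, wavelength, spacing):
    """Far-field bearing from the phase difference between two receive
    points separated by `spacing` meters.

    delta_phi  : phase difference in radians, wrapped to [-pi, pi)
    wavelength : carrier wavelength in meters (~0.06 m at 5 GHz WiFi)
    Uses delta_phi = 2*pi*spacing*sin(theta)/wavelength.
    """
    s = delta_phi * wavelength / (2.0 * math.pi * spacing)
    s = max(-1.0, min(1.0, s))  # guard against noise pushing |s| > 1
    return math.asin(s)
```

A half-wavelength spacing keeps the mapping unambiguous over the full front half-plane; the toolbox's trajectory-based aperture generalizes this idea to motion of a single antenna.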
-
This paper presents SVIn2, a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system, which fuses scanning profiling sonar, visual, inertial, and water-pressure information in a non-linear optimization framework for small- and large-scale challenging underwater environments. The developed real-time system features robust initialization, loop-closing, and relocalization capabilities, which make the system reliable in the presence of haze, blurriness, low light, and lighting variations typically observed in underwater scenarios. Over the last decade, Visual-Inertial Odometry and SLAM systems have shown excellent performance for mobile robots in indoor and outdoor environments, but often fail underwater due to the inherent difficulties in such environments. Our approach combats the weaknesses of previous approaches by utilizing additional sensors and exploiting their complementary characteristics. In particular, we use (1) acoustic range information for improved reconstruction and localization, thanks to the reliable distance measurement; and (2) depth information from a water-pressure sensor for robust initialization, refining the scale, and helping to limit the drift in the tightly-coupled integration. The developed software, made open source, has been successfully used to test and validate the proposed system in both benchmark datasets and numerous real-world underwater scenarios, including datasets collected with a custom-made underwater sensor suite and the Aqua2 autonomous underwater vehicle. SVIn2 demonstrated outstanding performance in terms of accuracy and robustness on those datasets and enabled other robotic tasks, for example, planning for underwater robots in the presence of obstacles.
-
The Georgia Tech Miniature Autonomous Blimp (GT-MAB) needs localization algorithms to navigate to waypoints in an indoor environment without leveraging an external motion capture system. Indoor aerial robots often require a motion capture system for localization or employ simultaneous localization and mapping (SLAM) algorithms for navigation. The proposed strategy for GT-MAB localization can be accomplished using lightweight sensors on a weight-constrained platform like the GT-MAB. We train an end-to-end convolutional neural network (CNN) that predicts the horizontal position and heading of the GT-MAB using video collected by an onboard monocular RGB camera. The height of the GT-MAB, in turn, is estimated from measurements by a time-of-flight (ToF) single-beam laser sensor. The monocular camera and the single-beam laser sensor are sufficient for the localization algorithm to localize the GT-MAB in real time, achieving average 3D positioning errors of less than 20 cm and average heading errors of less than 3 degrees. With the accuracy of our proposed localization method, we are able to use simple proportional-integral-derivative controllers to control the GT-MAB for waypoint navigation. Experimental results on waypoint following are provided, demonstrating the use of a CNN as the primary localization method for estimating the pose of an indoor robot and successfully enabling navigation to specified waypoints.
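The GT-MAB abstract closes the loop with simple proportional-integral-derivative controllers driven by the CNN pose estimate. A minimal sketch of such a discrete PID loop, with hypothetical gains and time step (the paper's actual gains and loop rate are not given in the abstract):

```python
class PID:
    """Minimal discrete PID controller for one axis of a pose error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        """Return the control output for the current tracking error."""
        self.integral += error * self.dt
        if self.prev_error is None:
            derivative = 0.0  # no derivative on the first sample
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a setup like the one described, one such controller per axis (x, y, height, heading) would take the difference between the waypoint and the estimated pose as its error input.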
-
Abstract Infrasound (low-frequency sound waves) can be used to monitor and characterize volcanic eruptions. However, infrasound sensors are usually placed on the ground, providing a limited sampling of the acoustic radiation pattern that can bias source size estimates. We present observations of explosive eruptions from a novel uncrewed aircraft system (UAS)-based infrasound sensor platform that was strategically hovered near the active vents of Stromboli volcano, Italy. We captured eruption infrasound from short-duration explosions and jetting events. While evidence of vertical directionality was inconclusive for the short-duration explosions, we find that jetting events exhibit vertical sound directionality, observed with the UAS positioned close to vertical. This directionality would not have been observed using only traditional deployments of ground-based infrasound sensors, but it is consistent with jet noise theory. This proof-of-concept study provides unique information that can improve our ability to characterize and quantify the directionality of volcanic eruptions and their associated hazards.

