A methodology to learn acoustical responses based on limited experimental datasets is presented. From a methodological standpoint, the approach involves a multiscale-informed encoder used to cast the learning task in a finite-dimensional setting. A neural network model mapping parameters of interest to the latent variables is then constructed and calibrated using transfer learning and knowledge gained from the multiscale surrogate. The relevance of the approach is assessed by considering the prediction of the sound absorption coefficient for randomly packed rigid spherical beads of equal diameter. A two-microphone method is used in this context to measure the absorption coefficient on a set of configurations with various monodisperse particle diameters and sample thicknesses, and a hybrid numerical approach relying on the Johnson-Champoux-Allard-Pride-Lafarge model is deployed as the multiscale-based predictor. It is shown that the strategy allows the relationship between the micro-/structural parameters and the experimental acoustic response to be well approximated, even when a small physical dataset (comprising ten samples) is used for training. The methodology therefore enables the identification and validation of acoustical models under constraints related to data limitation and parametric dependence. It also paves the way for an efficient exploration of the parameter space in acoustical materials design.
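As a rough illustration of the pipeline described above, the sketch below combines a low-dimensional encoder (here a plain PCA stand-in for the multiscale-informed encoder), a small neural network mapping (diameter, thickness) to latent coordinates, and a crude transfer-learning step that pretrains on surrogate predictions before fine-tuning on a handful of measured curves. All data shapes, parameter ranges, and the warm-start fine-tuning are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): PCA as a stand-in encoder,
# an MLP mapping (diameter, thickness) to latent variables, pretrained
# on surrogate (JCAPL-type) curves and fine-tuned on a small
# experimental set via warm-started fitting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical shapes: absorption curves sampled at 200 frequencies.
n_surrogate, n_exp, n_freq = 500, 10, 200
theta_surr = rng.uniform([0.5e-3, 10e-3], [5e-3, 60e-3], (n_surrogate, 2))  # (diameter, thickness)
alpha_surr = rng.random((n_surrogate, n_freq))   # placeholder surrogate absorption curves
theta_exp = rng.uniform([0.5e-3, 10e-3], [5e-3, 60e-3], (n_exp, 2))
alpha_exp = rng.random((n_exp, n_freq))          # placeholder measured curves

# "Encoder": project curves onto a low-dimensional latent basis.
encoder = PCA(n_components=5).fit(alpha_surr)
z_surr = encoder.transform(alpha_surr)
z_exp = encoder.transform(alpha_exp)

# Pretrain the parameter-to-latent map on surrogate data, then fine-tune
# on the small experimental set (a crude form of transfer learning).
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, warm_start=True)
net.fit(theta_surr, z_surr)   # pretraining on multiscale predictions
net.max_iter = 200
net.fit(theta_exp, z_exp)     # fine-tuning on measurements

# Predict an absorption curve for a new configuration.
alpha_pred = encoder.inverse_transform(net.predict([[2e-3, 30e-3]]))
```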
Embedded Object Detection and Mapping in Soft Materials Using Optical Tactile Sensing
Abstract: In this paper, we present a methodology that uses an optical tactile sensor for efficient tactile exploration of embedded objects within soft materials. The methodology consists of an exploration phase, in which a probabilistic estimate of the location of the embedded objects is built using a Bayesian approach. The exploration phase is followed by a mapping phase, which exploits the probabilistic map to reconstruct the underlying topography of the workspace by sampling in more detail the regions where embedded objects are expected. To demonstrate the effectiveness of the method, we tested our approach on an experimental setup consisting of a series of quartz beads located underneath a polyethylene foam, which prevents direct observation of the configuration and requires tactile exploration to recover the location of the beads. We show the performance of our methodology on ten different configurations of the beads, for which the proposed approach is able to approximate the underlying configuration. We benchmark our results against a random sampling policy. Our empirical results show that our method outperforms the fully random policy in both the exploration and mapping phases. The exploration phase produces a better probabilistic map with fewer samples, which enables an earlier transition to the mapping phase to reconstruct the underlying shape. In both phases, our method is also more consistent than the random policy, with a smaller standard deviation across the ten bead configurations.
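The exploration phase described above can be illustrated with a minimal grid-based sketch: each tactile press returns a noisy binary reading, cell occupancy probabilities are updated with Bayes' rule, and the next press targets the most uncertain cell. The sensor probabilities, grid size, and the uncertainty-greedy probing policy are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's implementation): a grid-based
# Bayesian update for the exploration phase. Each tactile press returns a noisy
# binary "object felt / not felt" reading; cell occupancy probabilities are
# updated with Bayes' rule, and the next press targets the most uncertain cell.
import numpy as np

P_HIT_GIVEN_OBJ = 0.9    # assumed sensor model: P(contact | bead present)
P_HIT_GIVEN_FREE = 0.1   # assumed false-positive rate

def bayes_update(p_obj, hit):
    """Update P(bead at cell) after one tactile press at that cell."""
    if hit:
        num = P_HIT_GIVEN_OBJ * p_obj
        den = num + P_HIT_GIVEN_FREE * (1.0 - p_obj)
    else:
        num = (1.0 - P_HIT_GIVEN_OBJ) * p_obj
        den = num + (1.0 - P_HIT_GIVEN_FREE) * (1.0 - p_obj)
    return num / den

def explore(true_grid, n_presses, rng):
    """Exploration phase: build a probabilistic map of bead locations."""
    belief = np.full(true_grid.shape, 0.5)   # uninformative prior
    for _ in range(n_presses):
        # Probe the cell whose occupancy is most uncertain (closest to 0.5).
        idx = np.unravel_index(np.argmin(np.abs(belief - 0.5)), belief.shape)
        p_hit = P_HIT_GIVEN_OBJ if true_grid[idx] else P_HIT_GIVEN_FREE
        hit = rng.random() < p_hit
        belief[idx] = bayes_update(belief[idx], hit)
    return belief

rng = np.random.default_rng(1)
true_grid = rng.random((8, 8)) < 0.2         # hypothetical bead layout
belief = explore(true_grid, n_presses=100, rng=rng)
# Mapping phase (not shown): sample densely wherever the belief is high to
# reconstruct the topography of the covered beads.
```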
- Award ID(s): 2142773
- PAR ID: 10497698
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: SN Computer Science
- Volume: 5
- Issue: 4
- ISSN: 2661-8907
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Localizing and tracking the pose of robotic grippers are necessary skills for manipulation tasks. However, manipulators with imprecise kinematic models (e.g., low-cost arms) or with unknown world coordinates (e.g., poor camera-arm calibration) cannot locate the gripper with respect to the world. In these circumstances, we can leverage tactile feedback between the gripper and the environment. In this paper, we present learnable Bayes filter models that can localize robotic grippers using tactile feedback. We propose a novel observation model that conditions the tactile feedback on visual maps of the environment, along with a motion model, to recursively estimate the gripper's location. Our models are trained in simulation with self-supervision and transferred to the real world. Our method is evaluated on a tabletop localization task in which the gripper interacts with objects. We report results in simulation and on a real robot, generalizing over different sizes, shapes, and configurations of the objects. (A minimal classical predict/update sketch of such a filter appears after this list.)
- We present a filtering-based method for semantic mapping that simultaneously detects objects and localizes their 6 degree-of-freedom poses. For our method, called Contextual Temporal Mapping (CT-Map), we represent the semantic map as a belief over object classes and poses across an observed scene. Inference for the semantic mapping problem is then modeled as a Conditional Random Field (CRF). CT-Map is a CRF that considers two forms of relationship potentials, accounting for contextual relations between objects and temporal consistency of object poses, as well as a measurement potential on observations. A particle filtering algorithm is then proposed to perform inference in the CT-Map model. We demonstrate the efficacy of CT-Map with a Michigan Progress Fetch robot equipped with an RGB-D sensor. Our results demonstrate that the particle-filtering-based inference of CT-Map provides improved object detection and pose estimation with respect to baseline methods that treat observations as independent samples of a scene.
- We propose SNNOpt, a systematic, application-specific hardware design methodology for Spiking Neural Networks (SNNs), which consists of three novel phases: 1) an Ollivier-Ricci-curvature (ORC)-based, architecture-aware network partitioning; 2) a reinforcement learning mapping strategy; and 3) a Bayesian optimization algorithm for NoC design-space exploration. Experimental results show that SNNOpt achieves 47.45% lower runtime and 58.64% energy savings over state-of-the-art approaches.
- Robotic surgical subtask automation has the potential to reduce the per-patient workload of human surgeons. A variety of surgical subtasks require geometric information about subsurface anatomy, such as the location of tumors, which necessitates accurate and efficient surgical sensing. In this work, we propose an automated sensing method that maps 3D subsurface anatomy to provide such geometric knowledge. We model the anatomy via a Bayesian Hilbert map-based probabilistic 3D occupancy map. Using the 3D occupancy map, we plan sensing paths on the surface of the anatomy via a graph search algorithm, A* search, with a cost function that balances exploration of unsensed regions against refinement of the existing probabilistic estimate. We demonstrate the performance of our method by comparing it against three other methods in several anatomical environments, including a real-life CT scan dataset. The experimental results show that our method efficiently detects relevant subsurface anatomy with shorter trajectories than the comparison methods, and the resulting occupancy map achieves high accuracy.
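As a loose illustration of the exploration-versus-refinement trade-off used in the last item above (and closely related to the exploration phase of the main paper), the sketch below scores candidate sensing locations on a probabilistic occupancy grid by weighting the entropy of unsensed cells against revisits to likely-occupied cells. The weights and grid are hypothetical, and the actual path planning (e.g., the A* search mentioned above) is only indicated in a comment.

```python
# Minimal sketch (not the authors' implementation): scoring candidate sensing
# locations on a probabilistic occupancy grid so that planned trajectories
# balance exploring unsensed regions against refining cells that are already
# believed to contain subsurface structure. w_explore / w_refine are
# hypothetical tuning parameters.
import numpy as np

def entropy(p):
    """Bernoulli entropy of an occupancy probability, in nats."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def score_candidates(occupancy, sensed_mask, w_explore=1.0, w_refine=0.5):
    """Higher score = more attractive next sensing location."""
    h = entropy(occupancy)
    explore_term = h * (~sensed_mask)        # information in unsensed cells
    refine_term = occupancy * sensed_mask    # revisit likely-occupied cells
    return w_explore * explore_term + w_refine * refine_term

rng = np.random.default_rng(2)
occupancy = rng.random((16, 16))             # placeholder Bayesian occupancy map
sensed = np.zeros_like(occupancy, dtype=bool)
sensed[:4, :4] = True                        # pretend one corner has been sensed
scores = score_candidates(occupancy, sensed)
next_cell = np.unravel_index(np.argmax(scores), scores.shape)
# A graph search (e.g., A*) would then plan a sensing path toward next_cell,
# accumulating these scores as negative costs along the trajectory.
```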
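The learnable Bayes filter item earlier in this list can likewise be illustrated with a classical histogram filter: a motion update that shifts and blurs the belief over gripper positions, and a measurement update that reweights cells by agreement between the tactile reading and a contact map derived from a visual map of the environment. The sensor probabilities, grid, and moves are assumptions; the paper's observation and motion models are learned rather than hand-specified as here.

```python
# Minimal sketch (assumptions, not the paper's learned models): a histogram
# Bayes filter over a discretized tabletop. The motion model shifts and blurs
# the belief after each commanded displacement; the observation model weights
# cells by how well the tactile reading matches a (hypothetical) contact map
# derived from a visual map of the environment.
import numpy as np

def predict(belief, move, motion_noise=0.1):
    """Motion update: shift the belief by the commanded move, then blur."""
    shifted = np.roll(belief, shift=move, axis=(0, 1))
    kernel = np.array([motion_noise, 1.0 - 2 * motion_noise, motion_noise])
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, shifted)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, blurred)
    return blurred / blurred.sum()

def update(belief, contact_map, felt_contact, p_correct=0.85):
    """Measurement update: cells consistent with the tactile reading gain mass."""
    likelihood = np.where(contact_map == felt_contact, p_correct, 1.0 - p_correct)
    posterior = belief * likelihood
    return posterior / posterior.sum()

rng = np.random.default_rng(3)
contact_map = rng.random((20, 20)) < 0.3     # hypothetical contact map from vision
belief = np.full((20, 20), 1.0 / 400)        # uniform prior over gripper cells
for move, felt in [((0, 1), True), ((1, 0), False), ((0, 1), True)]:
    belief = predict(belief, move)           # commanded displacement + drift
    belief = update(belief, contact_map, felt)
estimate = np.unravel_index(np.argmax(belief), belief.shape)
```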