Tactile sensing has been increasingly utilized in robot
control of unknown objects to infer physical properties and
optimize manipulation. However, there is limited understanding of how
different sensory modalities contribute to interactive perception during
complex interactions, both in robots
and in humans. This study investigated the effect of visual and
haptic information on humans’ exploratory interactions with
a ‘cup of coffee’, an object with nonlinear internal dynamics.
Subjects were instructed to rhythmically transport a virtual
cup with a rolling ball inside between two targets at a specified
frequency, using a robotic interface. The cup and targets were
displayed on a screen, and force feedback from the cup-and-ball
dynamics was provided via the robotic manipulandum.
Subjects were encouraged to explore and prepare the dynamics
by “shaking” the cup-and-ball system to find the best initial
conditions prior to the task. Two groups of subjects received
full haptic feedback about the cup-and-ball movement during
the task; however, for one group the ball movement was visually
occluded. Visual information about the ball movement had two
distinctive effects on the performance: it reduced preparation
time needed to understand the dynamics and, importantly, it
led to simpler, more linear input-output interactions between
hand and object. The results highlight how visual and haptic information
regarding nonlinear internal dynamics have distinct
roles for the interactive perception of complex objects.
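The internal dynamics in such tasks are commonly approximated by a pendulum (the ball) suspended from a cart (the cup) driven by the hand force. Below is a minimal simulation sketch under that cart-and-pendulum assumption; the parameter values and the rhythmic input are illustrative, not taken from the study.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: cup mass M, ball mass m, arm length l (SI units)
M, m, l, g = 0.5, 0.1, 0.05, 9.81

def hand_force(t):
    # Illustrative rhythmic "shaking" input, not the subjects' actual forces
    return 0.3 * np.sin(2 * np.pi * t)

def dynamics(t, state):
    x, dx, th, dth = state  # cup position/velocity, ball angle/velocity
    F = hand_force(t)
    # Standard cart-pendulum equations, theta measured from the downward vertical
    ddx = (F + m * np.sin(th) * (g * np.cos(th) + l * dth**2)) / (M + m * np.sin(th)**2)
    ddth = -(ddx * np.cos(th) + g * np.sin(th)) / l
    return [dx, ddx, dth, ddth]

sol = solve_ivp(dynamics, (0.0, 5.0), [0.0, 0.0, 0.2, 0.0], max_step=1e-3)
print(f"final cup position {sol.y[0, -1]:.3f} m, ball angle {sol.y[2, -1]:.3f} rad")

The nonlinearity sits in the coupling terms: the force felt at the hand depends on the ball's state, which is precisely what subjects had to perceive through vision and/or haptics.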
Teaching Cameras to Feel: Estimating Tactile Physical Properties of Surfaces from Images
The connection between visual input and tactile sensing is critical for object manipulation tasks such as grasping and pushing. In this work, we introduce the challenging task of estimating a set of tactile physical properties from visual information. We aim to build a model that learns the complex mapping between visual information and tactile physical properties. We construct a first-of-its-kind image-tactile dataset with over 400 multiview image sequences and the corresponding tactile properties. A total of fifteen tactile physical properties across categories including friction, compliance, adhesion, texture, and thermal conductance are measured and then estimated by our models. We develop a cross-modal framework comprising an adversarial objective and a novel visuo-tactile joint classification loss. Additionally, we introduce a neural architecture search framework capable of selecting optimal combinations of viewing angles for estimating a given physical property.
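As a rough illustration of how such a cross-modal framework could be wired up, the sketch below pairs an image encoder with a tactile encoder, an adversarial discriminator, and a shared classification head; the architecture, dimensions, and loss weights are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

EMB, N_CLASSES = 64, 15  # embedding size and class count are illustrative

image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, EMB))
tactile_encoder = nn.Sequential(nn.Linear(15, 64), nn.ReLU(), nn.Linear(64, EMB))
discriminator = nn.Sequential(nn.Linear(EMB, 64), nn.ReLU(), nn.Linear(64, 1))  # real vs. predicted embedding
joint_classifier = nn.Linear(EMB, N_CLASSES)  # shared by both modalities

bce, ce, mse = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.MSELoss()

def encoder_loss(images, tactile, labels):
    # One illustrative training objective for the encoders (discriminator held fixed)
    z_img = image_encoder(images)          # tactile embedding predicted from vision
    z_tac = tactile_encoder(tactile)       # embedding of the measured properties
    loss_reg = mse(z_img, z_tac.detach())  # pull the prediction toward the measured embedding
    loss_adv = bce(discriminator(z_img), torch.ones(len(images), 1))  # adversarial term: fool the discriminator
    # Joint classification: both embeddings must support the same label
    loss_cls = ce(joint_classifier(z_img), labels) + ce(joint_classifier(z_tac), labels)
    return loss_reg + 0.1 * loss_adv + loss_cls  # weights are illustrative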
- Award ID(s): 1715195
- NSF-PAR ID: 10292301
- Date Published:
- Journal Name: European Conference on Computer Vision
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Conventional Intelligent Virtual Agents (IVAs) focus primarily on the visual and auditory channels for both the agent and the interacting human: the agent displays a visual appearance and speech as output, while processing the human’s verbal and non-verbal behavior as input. However, some interactions, particularly those between a patient and healthcare provider, inherently include tactile components. We introduce an Intelligent Physical-Virtual Agent (IPVA) head that occupies an appropriate physical volume; can be touched; and via human-in-the-loop control can change appearance, listen, speak, and react physiologically in response to human behavior. Compared to a traditional IVA, it provides a physical affordance, allowing for more realistic and compelling human-agent interactions. In a user study focusing on neurological assessment of a simulated patient showing stroke symptoms, we compared the IPVA head with a high-fidelity touch-aware mannequin that has a static appearance. Various measures indicated greater attention to, affinity for, and presence with the IPVA patient among the human subjects, all factors that can improve healthcare training.
-
The goal of this article is to enable robots to perform robust task execution following human instructions in partially observable environments. A robot’s ability to interpret and execute commands is fundamentally tied to its semantic world knowledge. Commonly, robots use exteroceptive sensors, such as cameras or LiDAR, to detect entities in the workspace and infer their visual properties and spatial relationships. However, semantic world properties are often visually imperceptible. We posit the use of non-exteroceptive modalities including physical proprioception, factual descriptions, and domain knowledge as mechanisms for inferring semantic properties of objects. We introduce a probabilistic model that fuses linguistic knowledge with visual and haptic observations into a cumulative belief over latent world attributes to infer the meaning of instructions and execute the instructed tasks in a manner robust to erroneous, noisy, or contradictory evidence. In addition, we provide a method that allows the robot to communicate knowledge dissonance back to the human as a means of correcting errors in the operator’s world model. Finally, we propose an efficient framework that anticipates possible linguistic interactions and infers the associated groundings for the current world state, thereby bootstrapping both language understanding and generation. We present experiments on manipulators for tasks that require inference over partially observed semantic properties, and evaluate our framework’s ability to exploit expressed information and knowledge bases to facilitate convergence, and generate statements to correct declared facts that were observed to be inconsistent with the robot’s estimate of object properties.
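The core idea of accumulating a belief from heterogeneous evidence can be illustrated with a toy Bayesian filter; the latent property, observation likelihoods, and independence assumptions below are illustrative stand-ins, not the paper's model.

import numpy as np

states = ["full", "empty"]     # a toy latent semantic property
belief = np.array([0.5, 0.5])  # uniform prior

# p(observation | state) for each modality -- illustrative values
likelihoods = {
    "vision: opaque container": np.array([0.5, 0.5]),  # vision alone is uninformative
    "haptic: feels heavy":      np.array([0.9, 0.2]),  # weight suggests "full"
    "language: 'it is empty'":  np.array([0.1, 0.8]),  # contradicts the haptics
}

def update(belief, likelihood):
    # One multiplicative Bayes update; normalizing keeps a proper distribution
    posterior = belief * likelihood
    return posterior / posterior.sum()

for obs, lik in likelihoods.items():
    belief = update(belief, lik)
    print(obs, "->", dict(zip(states, belief.round(2))))

Because evidence enters multiplicatively, a noisy or contradictory observation shifts the belief without overwriting it, which is what makes execution robust to erroneous input.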
-
Tactile graphics are a common way to present information to people with vision impairments. Tactile graphics can be used to explore a broad range of static visual content but aren’t well suited to representing animation or interactivity. We introduce a new approach to creating dynamic tactile graphics that combines a touch screen tablet, static tactile overlays, and small mobile robots. We introduce a prototype system called RoboGraphics and several proof-of-concept applications. We evaluated our prototype with seven participants with varying levels of vision, comparing the RoboGraphics approach to a flat-screen audio-tactile interface. Our results show that dynamic tactile graphics can help visually impaired participants explore data quickly and accurately.
-
Understanding the properties of dust attenuation curves in galaxies and the physical mechanisms that shape them is among the fundamental questions of extragalactic astrophysics, with great practical significance for deriving the physical properties of galaxies. Attenuation curves result from a combination of dust grain properties, dust content, and the spatial arrangement of dust and different populations of stars. In this review, we assess the state of the field, paying particular attention to extinction curves as the building blocks of attenuation laws. We introduce a quantitative framework to characterize extinction and attenuation curves, present a theoretical foundation for interpreting empirical results, overview an array of observational methods, and review observational results at low and high redshifts. Our main conclusions include the following:
- Attenuation curves exhibit a wide range of UV-through-optical slopes, from curves with shallow (Milky Way-like) slopes to those exceeding the slope of the Small Magellanic Cloud extinction curve.
- The slopes of the curves correlate strongly with the effective optical opacities, in the sense that galaxies with lower dust column density (lower visual attenuation) tend to have steeper slopes, whereas galaxies with higher dust column density have shallower (grayer) slopes.
- Galaxies exhibit a range of 2175-Å UV bump strengths, including no bump; on average, the bump is suppressed compared with the average Milky Way extinction curve.
- Theoretical studies indicate that both the correlation between slope and dust column and the variations in bump strength may result from geometric and radiative-transfer effects.
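For concreteness, one widely used parameterization along these lines (Noll et al. 2009) expresses the attenuation curve as a tilted Calzetti law plus a Drude-profile bump; it is quoted here as a representative example of such a quantitative framework, not necessarily the exact form adopted in the review:

A(λ) = (A_V / 4.05) [k'(λ) + D(λ)] (λ / 5500 Å)^δ, with
D(λ) = E_b (λ Δλ)² / [(λ² − λ0²)² + (λ Δλ)²],

where k'(λ) is the Calzetti curve, δ tilts the UV-through-optical slope (δ < 0 is steeper than Calzetti), and E_b sets the strength of the 2175-Å bump, modeled as a Drude profile centered at λ0 = 2175 Å with width Δλ ≈ 350 Å.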