The brain estimates hand position using vision and position sense (proprioception). The relationship between visual and proprioceptive estimates is somewhat flexible: visual information about the index finger can be spatially displaced from proprioceptive information, resulting in cross-sensory recalibration of the visual and proprioceptive unimodal position estimates. According to the causal inference framework, recalibration occurs when the unimodal estimates are attributed to a common cause and integrated. If separate causes are perceived, then recalibration should be reduced. Here we assessed visuo-proprioceptive recalibration in response to a gradual visuo-proprioceptive mismatch at the left index fingertip. Experiment 1 asked how frequently a 70 mm mismatch is consciously perceived compared to when no mismatch is present, and whether awareness is linked to reduced visuo-proprioceptive recalibration, consistent with causal inference predictions. However, conscious offset awareness occurred rarely. Experiment 2 tested a larger displacement, 140 mm, and asked participants about their perception more frequently, including at 70 mm. Experiment 3 confirmed that participants were unbiased at estimating distances in the 2D virtual reality display. Results suggest that conscious awareness of the mismatch was indeed linked to reduced cross-sensory recalibration as predicted by the causal inference framework, but this was clear only at higher mismatch magnitudes (70–140 mm). At smaller offsets (up to 70 mm), conscious perception of an offset may not override unconscious belief in a common cause, perhaps because the perceived offset magnitude is in range of participants’ natural sensory biases. These findings highlight the interaction of conscious awareness with multisensory processes in hand perception.
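For intuition about the causal-inference prediction invoked above, here is a minimal sketch of the standard common-cause posterior (Gaussian noise on each unimodal estimate, a roughly uniform offset distribution under separate causes). All parameter names and values are illustrative assumptions, not taken from the study:

```python
import numpy as np
from scipy.stats import norm

def p_common(x_vis, x_prop, sigma_vis, sigma_prop,
             prior_common=0.5, workspace=0.3):
    """Posterior probability that the visual and proprioceptive estimates
    share a common cause (standard causal-inference form; illustrative)."""
    # Common cause: the two estimates differ only by their combined noise
    like_c1 = norm.pdf(x_vis - x_prop, loc=0.0,
                       scale=np.hypot(sigma_vis, sigma_prop))
    # Separate causes: any offset within the workspace is (roughly) equally likely
    like_c2 = 1.0 / workspace
    post_c1 = like_c1 * prior_common
    return post_c1 / (post_c1 + like_c2 * (1.0 - prior_common))

# A 70 mm offset with ~20 mm unimodal noise yields a low common-cause
# posterior (~0.17 here); smaller offsets keep it high, predicting more
# recalibration. Units are metres; noise values are assumed.
print(p_common(x_vis=0.07, x_prop=0.0, sigma_vis=0.02, sigma_prop=0.02))
```

In this form, offsets in the 70-140 mm range push the posterior toward separate causes, which is consistent with the reduced recalibration the study reports at those magnitudes.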
Empirically Evaluating the Effects of Perceptual Information Channels on the Size Perception of Tangibles in Near-Field Virtual Reality
Immersive Virtual Environments (IVEs) incorporating tangibles are becoming more accessible. The success of applications that combine 3D-printed tangibles with VR often depends on how accurately size is perceived. Research has shown that visuo-haptic perceptual information is important in the perception of size, but it is unclear how these sensory channels are affected by immersive virtual environments that incorporate tangible objects. To understand the effects of different sensory information channels on near-field size perception of graspable tangibles in IVEs, we conducted a between-subjects study evaluating the accuracy of size perception across three experimental conditions (Vision-only, Haptics-only, and Vision-and-Haptics). We found that, overall, participants consistently overestimated the size of the tangible dials regardless of the type of perceptual information presented. Participants in the Haptics-only condition overestimated diameters to a larger degree than those in the other conditions; participants were most accurate in the Vision-only condition and least accurate in the Haptics-only condition. Our results also revealed that increased efficiency in reporting size over time was most pronounced in the Vision-and-Haptics condition.
- Award ID(s): 1828611
- PAR ID: 10303757
- Date Published:
- Journal Name: IEEE VR 2021
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Each view of our environment captures only a subset of our immersive surroundings. Yet, our visual experience feels seamless. A puzzle for human neuroscience is to determine what cognitive mechanisms enable us to overcome our limited field of view and efficiently anticipate new views as we sample our visual surroundings. Here, we tested whether memory-based predictions of upcoming scene views facilitate efficient perceptual judgments across head turns. We tested this hypothesis using immersive, head-mounted virtual reality (VR). After learning a set of immersive real-world environments, participants (n = 101 across 4 experiments) were briefly primed with a single view from a studied environment and then turned left or right to make a perceptual judgment about an adjacent scene view. We found that participants’ perceptual judgments were faster when they were primed with images from the same (vs. neutral or different) environments. Importantly, priming required memory: it only occurred in learned (vs. novel) environments, where the link between adjacent scene views was known. Further, consistent with a role in supporting active vision, priming only occurred in the direction of planned head turns and only benefited judgments for scene views presented in their learned spatiotopic positions. Taken together, we propose that memory-based predictions facilitate rapid perception across large-scale visual actions, such as head and body movements, and may be critical for efficient behavior in complex immersive environments.
The perception of distance is a complex process that often involves sensory information beyond vision. In this work, we investigated whether depth perception based on auditory information can be calibrated, a process by which the accuracy of depth judgments improves through feedback and corrective action. We further investigated whether perceptual learning, through carryover effects of calibration, occurs at different levels of a virtual environment’s visibility, manipulated via virtual lighting. Users performed an auditory depth judgment task over several trials, walking to where they perceived an aural sound to be, which yielded absolute estimates of perceived distance. The task was performed in three sequential phases: pretest, calibration, and posttest. Feedback on the accuracy of distance estimates was provided only in the calibration phase, allowing us to study the calibration of auditory depth perception. We employed a 2 (Visibility of virtual environment) × 3 (Phase) × 5 (Target Distance) multi-factorial design, manipulating phase and target distance as within-subjects factors and visibility of the virtual environment as a between-subjects factor. Our results revealed that users generally underestimate aurally perceived distances in VR, similar to the distance compression effects that commonly occur in visual distance perception in VR. We found that auditory depth estimates, obtained using an absolute measure, can be calibrated to become more accurate through feedback and corrective action. In terms of environment visibility, we found that environments visible enough to reveal their extent may contain visual information that users attune to when scaling aurally perceived depth.
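As a concrete illustration of the pretest-calibration-posttest logic, below is a toy model of feedback-driven calibration. The power-law percept (a common way to model distance compression) and the simple gain-update rule are assumptions for illustration, not the study's method:

```python
def perceived_distance(actual, a=0.9, b=0.8):
    """Compressive percept: a power law is a common model of VR distance compression."""
    return a * actual ** b

def update_gain(gain, walked, actual, lr=0.3):
    """After feedback, nudge the response gain toward the value that
    would have produced an accurate walk-to-target response."""
    return gain + lr * gain * (actual / walked - 1.0)

gain = 1.0  # response gain learned during the calibration phase
for target in [2.0, 3.5, 5.0, 2.5, 4.0]:  # calibration targets in metres (illustrative)
    walked = gain * perceived_distance(target)
    gain = update_gain(gain, walked, actual=target)
    print(f"target={target:.1f} m, walked={walked:.2f} m, gain={gain:.2f}")
```

In this sketch the gain starts at 1 (underestimation, since the percept is compressive) and grows over calibration trials, mimicking the posttest improvement the study observed after feedback.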
Haptic devices have been developed in a wide range of form factors, actuation methods, and degrees of freedom, often with the goal of communicating information. While prior work has investigated the maximum rate and quantity of information that can be transferred through haptics, these measures often do not reflect how humans will actually use the devices. In this work, we measure the differences between perception and use as they relate to signal complexity. Using an inflatable soft haptic display with four independently actuated pouches, we provide navigation directions to participants. The haptic device operates in three modalities, in increasing order of signal complexity: Cardinal, Ordinal, and Continuous. We first measure participants’ accuracy in perceiving continuous signals generated by the device, showing average errors below 5 degrees. Participants then used the haptic device in each operating mode to guide an object toward a target in a 2-dimensional plane. Our results indicate that humans’ use of haptic signals often lags significantly behind the displayed signal and is less accurate than their static perception. Additionally, signal complexity was correlated with path efficiency but inversely correlated with movement speed, showing that even simple design changes create complex tradeoffs.
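To make the three modalities concrete, here is a sketch of how a target direction could be encoded into four pouch activations in each mode. The pouch layout, the snapping rules, and the cosine blend are all assumptions for illustration; the device's actual encoding may differ:

```python
import numpy as np

POUCH_ANGLES = np.deg2rad([0, 90, 180, 270])  # assumed pouch layout: N, E, S, W

def cardinal(theta):
    """Snap the target angle to the nearest of 4 directions; one pouch fully on."""
    out = np.zeros(4)
    out[int(np.round(theta / (np.pi / 2))) % 4] = 1.0
    return out

def ordinal(theta):
    """Snap to the nearest of 8 directions; diagonals split across two pouches."""
    step = int(np.round(theta / (np.pi / 4))) % 8
    out = np.zeros(4)
    if step % 2 == 0:                              # cardinal direction
        out[step // 2] = 1.0
    else:                                          # diagonal: share between neighbours
        out[step // 2] = out[(step // 2 + 1) % 4] = 0.5
    return out

def continuous(theta):
    """Blend adjacent pouches with cosine weights for any angle."""
    w = np.clip(np.cos(theta - POUCH_ANGLES), 0.0, None)
    return w / w.sum()

print(continuous(np.deg2rad(30)))  # mostly pouch 0 (N), partly pouch 1 (E)
```

Each mode trades spatial resolution against signal simplicity, which mirrors the path-efficiency versus movement-speed tradeoff reported above.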
Haptic devices enable communication via touch, augmenting visual and auditory displays or offering alternative channels of communication when vision and hearing are unavailable. Because users can perceive several types of haptic stimuli (vibration, skin stretch, pressure, and temperature, among others), devices can be designed to communicate complex information by delivering multiple types of haptic stimuli simultaneously. These multi-sensory haptic devices are often designed to be wearable and have been developed for a wide variety of applications, including communication, entertainment, and rehabilitation. Multi-sensory haptic devices present unique challenges to designers because human perceptual acuity varies widely depending on where a device is worn on the body and on individual differences in perceptual performance, particularly when multiple cues are presented simultaneously. Additionally, packaging haptic systems in a wearable form factor presents its own engineering challenges, such as cue masking, device mounting, and actuator capabilities. Thus, in this Review, we discuss the state of the art and the specific obstacles the field faces in producing multi-sensory devices that enhance the human capacity for haptic interaction and information transmission.