Title: How small changes to one eye’s retinal image can transform the perceived shape of a very familiar object
Vision can provide useful cues about the geometric properties of an object, like its size, distance, pose, and shape. But how the brain merges these properties into a complete sensory representation of a three-dimensional object is poorly understood. To address this gap, we investigated a visual illusion in which humans misperceive the shape of an object due to a small change in one eye’s retinal image. We first show that this illusion affects percepts of a highly familiar object under completely natural viewing conditions. Specifically, people perceived their own rectangular mobile phone to have a trapezoidal shape. We then investigate the perceptual underpinnings of this illusion by asking people to report both the perceived shape and pose of controlled stimuli. Our results suggest that the shape illusion results from distorted cues to object pose. In addition to yielding insights into object perception, this work informs our understanding of how the brain combines information from multiple visual cues in natural settings. The shape illusion can occur when people wear everyday prescription spectacles; thus, these findings also provide insight into the cue combination challenges that some spectacle wearers experience on a regular basis.
Award ID(s):
2041726
PAR ID:
10568292
Author(s) / Creator(s):
; ;
Publisher / Repository:
Proceedings of the National Academy of Sciences
Date Published:
Journal Name:
Proceedings of the National Academy of Sciences
Volume:
121
Issue:
17
ISSN:
0027-8424
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Haptic illusions provide unique insights into how we model our bodies as separate from our environment. Popular illusions like the rubber-hand illusion and mirror-box illusion have demonstrated that we can adapt the internal representations of our limbs in response to visuo-haptic conflicts. In this manuscript, we extend this knowledge by investigating to what extent, if any, we also augment our external representations of the environment and its action on our bodies in response to visuo-haptic conflicts. Using a mirror and a robotic brushstroking platform, we create a novel illusory paradigm that presents a visuo-haptic conflict via congruent and incongruent tactile stimuli applied to participants' fingers. Overall, we observed that participants perceived an illusory tactile sensation on their visually occluded finger when seeing a visual stimulus that was inconsistent with the actual tactile stimulus provided. We also found residual effects of the illusion after the conflict was removed. These findings highlight how our need to maintain a coherent internal representation of our body extends to our model of our environment.
  2. Detecting and avoiding obstacles while navigating can pose a challenge for people with low vision, but augmented reality (AR) has the potential to assist by enhancing obstacle visibility. Perceptual and user experience research is needed to understand how to craft effective AR visuals for this purpose. We developed a prototype AR application capable of displaying multiple kinds of visual cues for obstacles on an optical see-through head-mounted display. We assessed the usability of these cues via a study in which participants with low vision navigated an obstacle course. The results suggest that 3D world-locked AR cues were superior to directional heads-up cues for most participants during this activity. 
  3. Social touch is a common method of communication between individuals, but touch cues alone provide only a glimpse of the entire interaction. Visual and auditory cues are also present in these interactions, and increase the expressiveness and recognition of the conveyed information. However, most mediated touch interactions have focused on providing only haptic cues to the user. Our research addresses this gap by adding visual cues to a mediated social touch interaction through an array of LEDs attached to a wearable device. This device consists of an array of voice-coil actuators that present normal force to the user’s forearm to recreate the sensation of social touch gestures. We conducted a human subject study (N = 20) to determine the relative importance of the touch and visual cues. Our results demonstrate that visual cues, particularly color and pattern, significantly enhance perceived realism, as well as alter perceived touch intensity, valence, and dominance of the mediated social touch. These results illustrate the importance of closely integrating multisensory cues to create more expressive and realistic virtual interactions. 
  4. Oh, A; Naumann, T; Globerson, A; Saenko, K; Hardt, M; Levine, S (Ed.)
    Current deep-learning models for object recognition are known to be heavily biased toward texture. In contrast, human visual systems are known to be biased toward shape and structure. What could be the design principles in human visual systems that led to this difference? How could we introduce more shape bias into deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can by itself introduce shape bias into the network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in neurons in convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network structures with various datasets. For object recognition convolutional neural networks, the shape bias leads to greater robustness against style and pattern change distraction. For image-synthesis generative adversarial networks, the emergent shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structures, whereas more distributed codes tend to favor texture. Our code is hosted at: https://topk-shape-bias.github.io/
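The Top-K sparsity constraint described in the abstract above can be illustrated in a few lines. The following is a minimal, hypothetical PyTorch sketch, not the authors' released code (which is linked in the abstract): a layer that keeps only the K largest channel activations at each spatial position and zeroes the rest, one plausible way to impose a hard sparse-coding constraint inside a convolutional network. The class name, K value, and placement after a convolution are assumptions for illustration.

# Hypothetical sketch of a Top-K sparsity layer; names and placement are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class TopKSparsity(nn.Module):
    """Keep the K most active channels at each spatial position; zero out the rest."""
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, height, width)
        values, _ = torch.topk(x, self.k, dim=1)   # K largest activations at each position
        threshold = values[:, -1:, :, :]           # smallest activation that survives
        mask = (x >= threshold).to(x.dtype)        # 1 where the activation is kept, 0 elsewhere
        return x * mask                            # hard Top-K; gradients flow only through kept units

# Example placement after a convolution (one plausible choice):
block = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), TopKSparsity(k=8))
out = block(torch.randn(1, 3, 32, 32))
print(out.shape, (out != 0).float().mean())        # at most k of the 64 channels stay nonzero per position

The selection step in such a layer is non-differentiable, matching the abstract's description; in practice gradients simply pass through the surviving activations.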
  5. The human-robot interaction (HRI) field has recognized the importance of enabling robots to interact with teams. Human teams rely on effective communication for successful collaboration in time-sensitive environments. Robots can play a role in enhancing team coordination through real-time assistance. Despite significant progress in human-robot teaming research, there remains an essential gap in how robots can effectively communicate with action teams using multimodal interaction cues in time-sensitive environments. This study addresses this knowledge gap in an experimental in-lab study to investigate how multimodal robot communication in action teams affects workload and human perception of robots. We explore team collaboration in a medical training scenario where a robotic crash cart (RCC) provides verbal and non-verbal cues to help users remember to perform iterative tasks and search for supplies. Our findings show that verbal cues for object search tasks and visual cues for task reminders reduce team workload and increase perceived ease of use and perceived usefulness more effectively than a robot with no feedback. Our work contributes to multimodal interaction research in the HRI field, highlighting the need for more human-robot teaming research to understand best practices for integrating collaborative robots in time-sensitive environments such as hospitals, search and rescue, and manufacturing.