Title: Unchanging visions: the effects and limitations of ocular stillness
Scientists have pondered the perceptual effects of ocular motion, and those of its counterpart, ocular stillness, for over 200 years. The unremitting ‘trembling of the eye’ that occurs even during gaze fixation was first noted by Jurin in 1738. In 1794, Erasmus Darwin documented that gaze fixation produces perceptual fading, a phenomenon rediscovered in 1804 by Ignaz Paul Vital Troxler. Studies in the twentieth century established that Jurin's ‘eye trembling’ consisted of three main types of ‘fixational’ eye movements, now called microsaccades (or fixational saccades), drifts and tremor. Yet, owing to the constant and minute nature of these motions, the study of their perceptual and physiological consequences has met significant technological challenges. Studies starting in the 1950s and continuing to the present have attempted to study vision during retinal stabilization—a technique that consists of shifting any and all visual stimuli presented to the eye in such a way as to nullify all concurrent eye movements—providing a tantalizing glimpse of vision in the absence of change. No research to date has achieved perfect retinal stabilization, however, and so other work has devised substitute ways to counteract eye motion, such as by studying the perception of afterimages or of the entoptic images formed by retinal vessels, which are completely stable with respect to the eye. Yet other research has taken the alternative tack of controlling eye motion by behavioural instruction to fix one's gaze or to keep one's gaze still, during concurrent physiological and/or psychophysical measurements. Here, we review the existing data—from historical and contemporary studies that have aimed to nullify or minimize eye motion—on the perceptual and physiological consequences of perfect versus imperfect fixation. We also discuss the accuracy, quality and stability of ocular fixation, and the bottom–up and top–down influences that affect fixation behaviour.
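The retinal-stabilization technique described above lends itself to a simple sketch. The Python fragment below is illustrative only (the function names and the gain parameter are assumptions, not from the paper): each frame, the stimulus is shifted by the measured gaze displacement so the image stays fixed on the retina, and a gain below 1.0 stands in for the imperfect stabilization the review discusses.

```python
# Minimal sketch of retinal stabilization: shift the stimulus by the
# measured gaze displacement each frame. gain = 1.0 models perfect
# stabilization; real systems fall short (tracker noise, latency),
# modeled here by gain < 1.0. All names/units are hypothetical.
def stabilize(eye_trace, gain=1.0):
    """Return per-frame stimulus positions that counteract eye motion.

    eye_trace: list of (x, y) gaze positions in degrees.
    gain: fraction of eye motion compensated (1.0 = perfect).
    """
    x0, y0 = eye_trace[0]
    return [(gain * (x - x0), gain * (y - y0)) for x, y in eye_trace]

def retinal_positions(eye_trace, stim_positions):
    """Residual retinal motion: stimulus position minus gaze displacement."""
    x0, y0 = eye_trace[0]
    return [(sx - (x - x0), sy - (y - y0))
            for (x, y), (sx, sy) in zip(eye_trace, stim_positions)]
```

With gain = 1.0 the residual retinal motion is zero (the "perfect fixation" ideal); with gain = 0.9, ten per cent of the eye movement still reaches the retina, as in real stabilization rigs.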
This article is part of the themed issue ‘Movement suppression: brain mechanisms for stopping and stillness’.
Award ID(s): 1523614
PAR ID: 10429897
Author(s) / Creator(s): ;
Date Published:
Journal Name: Philosophical Transactions of the Royal Society B: Biological Sciences
Volume: 372
Issue: 1718
ISSN: 0962-8436
Page Range / eLocation ID: 20160204
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract: Humans actively observe the visual surroundings by focusing on salient objects and ignoring trivial details. However, computer vision models based on convolutional neural networks (CNN) often analyze visual input all at once through a single feedforward pass. In this study, we designed a dual-stream vision model inspired by the human brain. This model features retina-like input layers and includes two streams: one determines the next point of focus (the fixation), while the other interprets the visual content surrounding the fixation. Trained on image recognition, this model examines an image through a sequence of fixations, each time focusing on different parts, thereby progressively building a representation of the image. We evaluated this model against various benchmarks in terms of object recognition, gaze behavior, and adversarial robustness. Our findings suggest that the model can attend and gaze in ways similar to humans without being explicitly trained to mimic human attention and that the model can enhance robustness against adversarial attacks due to its retinal sampling and recurrent processing. In particular, the model can correct its perceptual errors by taking more glances, setting itself apart from all feedforward-only models. In conclusion, the interactions of retinal sampling, eye movement, and recurrent dynamics are important to human-like visual exploration and inference.
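The retina-like sampling and fixation sequence described in this abstract can be caricatured in a few lines of numpy. This is a toy sketch under stated assumptions, not the authors' model: saliency here is simply pixel brightness standing in for a learned attention map, and the "glimpse" is a sharp foveal patch plus a coarse peripheral view.

```python
import numpy as np

def glimpse(img, fix, patch=8):
    """Retina-like sample: a full-resolution patch around the fixation
    (the 'fovea') plus a 4x block-averaged view of the whole image
    (the 'periphery'). Returns the patch, the coarse view, and the
    patch's top-left corner."""
    h, w = img.shape
    r0 = min(max(fix[0] - patch // 2, 0), h - patch)
    c0 = min(max(fix[1] - patch // 2, 0), w - patch)
    fovea = img[r0:r0 + patch, c0:c0 + patch]
    periphery = img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
    return fovea, periphery, (r0, c0)

def scan(img, n_fix=3, patch=8):
    """Progressively build a representation over a sequence of fixations.
    The next fixation is the most salient unvisited pixel (brightness is
    a stand-in for the model's learned attention)."""
    sal = img.astype(float).copy()
    memory = np.zeros(img.shape)
    for _ in range(n_fix):
        fix = np.unravel_index(np.argmax(sal), sal.shape)
        fovea, _, (r0, c0) = glimpse(img, fix, patch)
        memory[r0:r0 + patch, c0:c0 + patch] = fovea   # remember what was seen
        sal[r0:r0 + patch, c0:c0 + patch] = -np.inf    # inhibition of return
    return memory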
  2. Abstract: Head movement relative to the stationary environment gives rise to congruent vestibular and visual optic-flow signals. The resulting perception of a stationary visual environment, referred to herein as stationarity perception, depends on mechanisms that compare visual and vestibular signals to evaluate their congruence. Here we investigate the functioning of these mechanisms and their dependence on fixation behavior as well as on the active versus passive nature of the head movement. Stationarity perception was measured by modifying the gain on visual motion relative to head movement on individual trials and asking subjects to report whether the gain was too low or too high. Fitting a psychometric function to the data yields two key parameters of performance. The mean is a measure of accuracy, and the standard deviation is a measure of precision. Experiments were conducted using a head-mounted display with fixation behavior monitored by an embedded eye tracker. During active conditions, subjects rotated their heads in yaw ∼15 deg/s over ∼1 s. Each subject’s movements were recorded and played back via a rotating chair during the passive condition. During head-fixed and scene-fixed fixation the fixation target moved with the head or scene, respectively. Both precision and accuracy were better during active than passive head movement, likely due to increased precision on the head movement estimate arising from motor prediction and neck proprioception. Performance was also better during scene-fixed than head-fixed fixation, perhaps due to decreased velocity of retinal image motion and increased precision on the retinal image motion estimate. These results reveal how the nature of head and eye movements mediate encoding, processing, and comparison of relevant sensory and motor signals.
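The psychometric fitting step described here can be sketched as a generic cumulative-Gaussian maximum-likelihood fit. The grid ranges and function names below are assumptions, not the authors' analysis code; the fitted mean corresponds to accuracy (the gain perceived as stationary) and the fitted standard deviation to precision.

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative Gaussian: P('gain too high') as a function of visual gain."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(gains, responses):
    """Grid-search maximum-likelihood fit of a cumulative Gaussian to
    binary 'too high' (1) / 'too low' (0) reports. Returns (mu, sigma):
    mu is the accuracy estimate, sigma the precision (smaller = better).
    Grid bounds are illustrative assumptions."""
    best, best_ll = (None, None), -math.inf
    for mu in [m / 100.0 for m in range(50, 151)]:       # candidate means 0.50..1.50
        for sigma in [s / 100.0 for s in range(2, 51)]:  # candidate sds 0.02..0.50
            ll = 0.0
            for g, r in zip(gains, responses):
                p = min(max(norm_cdf(g, mu, sigma), 1e-9), 1 - 1e-9)
                ll += math.log(p) if r else math.log(1.0 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best
```

A gain of 1.0 means the visual scene moves exactly as much as the head; a fitted mu near 1.0 therefore indicates accurate stationarity perception.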
  3. Instant access to personal data is a double-edged sword that has transformed society. It enhances convenience and interpersonal interactions through social media, while also making us all more vulnerable to identity theft and cybercrime. The need for hack-resistant biometric authentication is greater than ever. Previous studies have demonstrated that eye movements differ between individuals, so the characterization of eye movements might provide a highly secure and convenient approach to personal identification, because eye movements are generated by the owner’s living brain in real-time and are therefore extremely difficult for hackers to imitate. To study the potential of eye movements as a biometric tool, we characterized the eye movements of 18 participants. We examined an entire battery of oculomotor behaviors, including the unconscious eye movements that occur during ocular fixation; this resulted in a high-precision oculomotor signature that can identify individuals. We show that one-versus-one machine learning classification, applied with a nearest neighbor statistic, yielded an accuracy of >99% based on ~25-minute sessions, during which participants executed fixations, visual pursuits, free viewing of images, etc. Even if we examine only the ~3 minutes in which participants executed the fixation task by itself, discrimination accuracy was higher than 96%. When we further split the fixation data randomly into 30-second chunks, we obtained a remarkably high accuracy of 92%. Because eye-trackers provide improved spatial and temporal resolution with each new generation, we expect that both accuracy and the minimum sample duration necessary for reliable oculomotor biometric verification can be further optimized.
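The nearest-neighbor identification step can be illustrated with a minimal sketch. The feature names and values below are hypothetical (the actual signature draws on a full battery of oculomotor measures); the core idea is simply to assign a probe recording to the enrolled identity whose feature vector is closest.

```python
import math

def nearest_neighbor_id(enrolled, probe):
    """Identify the owner of a probe feature vector as the enrolled
    identity whose oculomotor signature is nearest in Euclidean distance.

    enrolled: dict mapping identity -> feature vector, e.g. hypothetical
              [microsaccade rate (/s), drift speed (deg/s), blink rate (/min)].
    probe: feature vector from a new recording session.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(enrolled, key=lambda name: dist(enrolled[name], probe))
```

In practice one would normalize each feature to comparable scales before computing distances, so that no single measure dominates.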
  4. Oh, A; Naumann, T; Globerson, A; Saenko, K; Hardt, M; Levine, S (Ed.)
    The human visual system uses two parallel pathways for spatial processing and object recognition. In contrast, computer vision systems tend to use a single feedforward pathway, rendering them less robust, adaptive, or efficient than human vision. To bridge this gap, we developed a dual-stream vision model inspired by the human eyes and brain. At the input level, the model samples two complementary visual patterns to mimic how the human eyes use magnocellular and parvocellular retinal ganglion cells to separate retinal inputs to the brain. At the backend, the model processes the separate input patterns through two branches of convolutional neural networks (CNN) to mimic how the human brain uses the dorsal and ventral cortical pathways for parallel visual processing. The first branch (WhereCNN) samples a global view to learn spatial attention and control eye movements. The second branch (WhatCNN) samples a local view to represent the object around the fixation. Over time, the two branches interact recurrently to build a scene representation from moving fixations. We compared this model with the human brain processing the same movie and evaluated their functional alignment by linear transformation. The WhereCNN and WhatCNN branches were found to differentially match the dorsal and ventral pathways of the visual cortex, respectively, primarily due to their different learning objectives, rather than their distinctions in retinal sampling or sensitivity to attention-driven eye movements. These model-based results lead us to speculate that the distinct responses and representations of the ventral and dorsal streams are more influenced by their distinct goals in visual attention and object recognition than by their specific bias or selectivity in retinal inputs. This dual-stream model takes a further step in brain-inspired computer vision, enabling parallel neural networks to actively explore and understand the visual surroundings.
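The complementary input sampling described here can be caricatured in numpy. This is a toy sketch under stated assumptions, not the published model (WhereCNN and WhatCNN are trained CNN branches): a coarse block-averaged global view stands in for the wide-field magnocellular sample fed to the "where" stream, and a full-resolution crop around the fixation stands in for the foveal parvocellular sample fed to the "what" stream.

```python
import numpy as np

def split_streams(img, fix, fovea=16, block=4):
    """Split one image into two complementary inputs.

    'where' stream: a coarse global view (block-averaged), a stand-in
    for the low-resolution, wide-field magnocellular sample.
    'what' stream: a full-resolution crop around the fixation, a
    stand-in for the high-acuity, foveal parvocellular sample.
    """
    h, w = img.shape
    where = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    r0 = min(max(fix[0] - fovea // 2, 0), h - fovea)
    c0 = min(max(fix[1] - fovea // 2, 0), w - fovea)
    what = img[r0:r0 + fovea, c0:c0 + fovea]
    return where, what
```

In the full model the "where" output would propose the next fixation, which in turn selects the next "what" crop — the recurrent interaction the abstract describes.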
  5. Abstract: Light intensity varies 1 million‐fold between night and day, driving the evolution of eye morphology and retinal physiology. Despite extensive research across taxa showing anatomical adaptations to light niches, surprisingly few empirical studies have quantified the relationship between such traits and the physiological sensitivity to light. In this study, we employ a comparative approach in frogs to determine the physiological sensitivity of eyes in two nocturnal (Rana pipiens, Hyla cinerea) and two diurnal species (Oophaga pumilio, Mantella viridis), examining whether differences in retinal thresholds can be explained by ocular and cellular anatomy. Scotopic electroretinogram (ERG) analysis of relative b‐wave amplitude reveals 10‐ to 100‐fold greater light sensitivity in nocturnal compared to diurnal frogs. Ocular and cellular optics (aperture, focal length, and rod outer segment dimensions) were assessed via the Land equation to quantify differences in optical sensitivity. Variance in retinal thresholds was overwhelmingly explained by Land equation solutions, which describe the optical sensitivity of single rods. Thus, at the b‐wave, stimulus‐response thresholds may be unaffected by photoreceptor convergence (which creates larger, combined collecting areas). Follow‐up experiments were conducted using photopic ERGs, which reflect cone vision. Under these conditions, the relative difference in thresholds was reversed, such that diurnal species were more sensitive than nocturnal species. Thus, photopic data suggest that rod‐specific adaptations, not ocular anatomy (e.g., aperture and focal distance), drive scotopic threshold differences. To the best of our knowledge, these data provide the first quantified relationship between optical and physiological sensitivity in vertebrates active in different light regimes.
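One commonly used form of the Land equation for the optical sensitivity of a single photoreceptor is S = (π/4)² A² (d/f)² (1 − e^(−kl)), where A is the pupil aperture, f the focal length, d and l the outer-segment diameter and length, and k the absorption coefficient. The sketch below evaluates it for illustrative parameter values (not the paper's measurements); the qualitative prediction is that nocturnal-like optics yield far higher single-rod sensitivity.

```python
import math

def optical_sensitivity(aperture, focal_len, rod_diam, rod_len, k=0.035):
    """Land equation for single-photoreceptor optical sensitivity.

    aperture, focal_len, rod_diam, rod_len in micrometres; k is the
    absorption coefficient per micrometre (0.035 is a typical assumed
    value, not a measured one).
    """
    return ((math.pi / 4.0) ** 2 * aperture ** 2
            * (rod_diam / focal_len) ** 2
            * (1.0 - math.exp(-k * rod_len)))
```

Larger apertures, wider rods, and longer outer segments each raise S multiplicatively, which is why nocturnal eyes with such traits can show 10- to 100-fold threshold advantages.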