This paper discusses a novel approach for the exploration of an underwater structure. A team of robots splits into two roles: some robots approach the structure to collect detailed information (proximal observers), while the rest (distal observers) keep their distance, providing an overview of the mission and assisting in the localization of the proximal observers via a Cooperative Localization framework. Proximal observers utilize a novel robust switching model-based/visual-inertial odometry to overcome vision-based localization failures. Exploration strategies for both the proximal and the distal observers are discussed.
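The Cooperative Localization idea above can be illustrated with a minimal sketch: a distal observer, whose own pose is well constrained, measures the relative position of a proximal observer, and that measurement is fused with the proximal observer's drifting dead-reckoned estimate. The 2-D state, noise values, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_relative_fix(prox_mean, prox_cov, distal_pos, rel_meas, rel_cov):
    """Kalman-style update of a proximal observer's 2-D position estimate
    using a relative-position measurement taken by a distal observer."""
    H = np.eye(2)                          # measurement observes the position directly
    z = distal_pos + rel_meas              # measurement expressed in the global frame
    S = H @ prox_cov @ H.T + rel_cov       # innovation covariance
    K = prox_cov @ H.T @ np.linalg.inv(S)  # Kalman gain
    new_mean = prox_mean + K @ (z - H @ prox_mean)
    new_cov = (np.eye(2) - K @ H) @ prox_cov
    return new_mean, new_cov

# Toy example: dead reckoning has drifted; one relative fix pulls the estimate back.
mean, cov = np.array([10.0, 4.0]), np.diag([4.0, 4.0])    # uncertain proximal estimate
distal, rel = np.array([0.0, 0.0]), np.array([8.5, 3.2])  # distal robot's observation
mean, cov = fuse_relative_fix(mean, cov, distal, rel, np.diag([0.1, 0.1]))
print(mean, np.diag(cov))  # estimate moves toward the observation and uncertainty shrinks
```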
Visual mode switching learned through repeated adaptation to color
When the environment changes, vision adapts to maintain accurate perception. For repeatedly encountered environments, learning to adjust more rapidly would be beneficial, but past work remains inconclusive. We tested if the visual system can learn such visual mode switching for a strongly color-tinted environment, where adaptation causes the dominant hue to fade over time. Eleven observers wore bright red glasses for five 1-hr periods per day, for 5 days. Color adaptation was measured by asking observers to identify ‘unique yellow’, appearing neither reddish nor greenish. As expected, the world appeared less and less reddish during the 1-hr periods of glasses wear. Critically, across days the world also appeared significantly less reddish immediately upon donning the glasses. These results indicate that the visual system learned to rapidly adjust to the reddish environment, switching modes to stabilize color vision. Mode switching likely provides a general strategy to optimize perceptual processes.
- Award ID(s): 1558308
- PAR ID: 10319942
- Date Published:
- Journal Name: eLife
- Volume: 9
- ISSN: 2050-084X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The visual system adapts to the environment, changing neural responses to aid efficiency and improve perception. However, these changes sometimes lead to negative consequences: if neurons at later processing stages fail to account for adaptation at earlier stages, perceptual errors result, including common visual illusions. These negative effects of adaptation have been termed the coding catastrophe. How does the visual system resolve them? We hypothesized that higher-level adaptation can correct errors arising from the coding catastrophe by changing what appears normal, a common form of adaptation across domains. Observers (N = 15) viewed flickering checkerboards that caused a normal face to appear distorted. We tested whether the visual system can adapt to this adaptation-distorted face through repeated viewing. Results from two experiments show that such meta-adaptation does occur and that it makes the distorted face gradually appear more normal. Meta-adaptation may be a general strategy to correct negative consequences of low-level adaptation.
- Light-on-dark color schemes, so-called “Dark Mode,” are becoming increasingly popular across a wide range of display technologies and application fields. Many people who have to look at computer screens for hours at a time, such as computer programmers and computer graphics artists, indicate a preference for switching colors on a computer screen from dark text on a light background to light text on a dark background, due to perceived advantages related to visual comfort and acuity, specifically when working in low-light environments. In this article, we investigate the effects of dark mode color schemes in the field of optical see-through head-mounted displays (OST-HMDs), where the characteristic “additive” light model implies that bright graphics are visible but dark graphics are transparent. We describe two human-subject studies in which we evaluated a normal and an inverted color mode in front of different physical backgrounds and under different lighting conditions. Our results indicate that dark mode graphics displayed on the HoloLens have significant benefits for visual acuity and usability, while user preferences depend largely on the lighting in the physical environment. We discuss the implications of these effects for user interfaces and applications. An additive-compositing sketch illustrating this light model appears after this list.
- Human beings subjectively experience a rich visual percept. However, when behavioral experiments probe the details of that percept, observers perform poorly, suggesting that vision is impoverished. What can explain this awareness puzzle? Is the rich percept a mere illusion? How does vision work as well as it does? This paper argues for two important pieces of the solution. First, peripheral vision encodes its inputs using a scheme that preserves a great deal of useful information, while losing the information necessary to perform certain tasks. The tasks rendered difficult by the peripheral encoding include many of those used to probe the details of visual experience. Second, many tasks used to probe attentional and working memory limits are, arguably, inherently difficult, and poor performance on these tasks may indicate limits on decision complexity. Two assumptions are critical to making sense of this hypothesis: (1) all visual perception, conscious or not, results from performing some visual task; and (2) all visual tasks face the same limit on decision complexity. Together, peripheral encoding plus decision complexity can explain a wide variety of phenomena, including vision’s marvelous successes, its quirky failures, and our rich subjective impression of the visual world.
- This paper addresses the robustness problem of visual-inertial state estimation for underwater operations. Underwater robots operating in a challenging environment are required to know their pose at all times. All vision-based localization schemes are prone to failure due to poor visibility conditions, color loss, and lack of features. The proposed approach utilizes a model of the robot's kinematics together with proprioceptive sensors to maintain the pose estimate during visual-inertial odometry (VIO) failures. Furthermore, the trajectories from successful VIO and those from the model-driven odometry are integrated into a coherent set that maintains a consistent pose at all times. Health monitoring tracks the VIO process, ensuring timely switches between the two estimators. Finally, loop closure is implemented on the overall trajectory. The resulting framework is a robust estimator that switches between model-based and visual-inertial odometry (SM/VIO). Experimental results from numerous deployments of the Aqua2 vehicle demonstrate the robustness of our approach over coral reefs and a shipwreck. A minimal sketch of such a health-monitored estimator switch appears after this list.
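The “additive” light model mentioned in the dark-mode study above can be made concrete with a short sketch: on an optical see-through display, the light emitted by the screen adds to the light arriving from the real background, so near-black pixels contribute almost nothing and appear transparent. The clamped RGB model and the sample values are illustrative assumptions, not measurements from a HoloLens.

```python
import numpy as np

def perceived_color(display_rgb, background_rgb):
    """Approximate additive compositing on an optical see-through HMD:
    emitted display light adds to the real-world background light."""
    return np.clip(np.asarray(display_rgb) + np.asarray(background_rgb), 0.0, 1.0)

bright_room = np.array([0.7, 0.7, 0.7])     # light physical background
dark_panel  = np.array([0.05, 0.05, 0.05])  # dark-mode background / dark glyphs
light_text  = np.array([0.9, 0.9, 0.9])     # bright glyphs

print(perceived_color(dark_panel, bright_room))  # ~[0.75 0.75 0.75]: dark content washes out
print(perceived_color(light_text, bright_room))  # [1. 1. 1.]: bright content remains visible
```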
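The health-monitored estimator switch described in the last entry can be sketched as a simple selector: while VIO tracks enough features it supplies the pose, and when it fails the estimate is propagated with the robot's kinematic model until VIO recovers. The feature-count threshold, state layout, and function names are assumptions for illustration, not the SM/VIO implementation.

```python
import numpy as np

def propagate_model(pose, velocity, dt):
    """Dead-reckon with a simplified kinematic model (world-frame velocity assumed)."""
    return pose + velocity * dt

def select_pose(vio_pose, vio_feature_count, last_pose, velocity, dt, min_features=25):
    """Health-monitored switch: trust VIO only while it tracks enough features."""
    if vio_pose is not None and vio_feature_count >= min_features:
        return vio_pose, "VIO"
    return propagate_model(last_pose, velocity, dt), "MODEL"

# Toy run: VIO drops out for two steps (e.g., a featureless water column); the model bridges the gap.
pose = np.zeros(3)
vio_stream = [(np.array([0.1, 0.0, 0.0]), 120),   # healthy VIO
              (None, 0),                          # VIO failure
              (None, 3),                          # too few tracked features
              (np.array([0.42, 0.01, 0.0]), 90)]  # VIO recovers
for vio_pose, n_features in vio_stream:
    pose, source = select_pose(vio_pose, n_features, pose, np.array([0.1, 0.0, 0.0]), dt=1.0)
    print(source, pose)
```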