
Title: Judgments of Object Size and Distance across Different Virtual Reality Environments: A Preliminary Study
Emerging technologies offer the potential to expand the domain of the future workforce to extreme environments, such as outer space and alien terrains. To understand how humans navigate in environments that lack familiar spatial cues, this study examined spatial perception in three types of environments simulated using virtual reality. We examined participants’ ability to estimate the size and distance of stimuli under conditions of minimal, moderate, or maximum visual cues, corresponding to environments simulating outer space, an alien terrain, and a typical cityscape, respectively. The findings show underestimation of distance in both the maximum and the minimum visual cue environments but a tendency toward overestimation of distance in the moderate environment. We further observed that depth estimation was substantially better in the minimum environment than in the other two environments, whereas estimation of height was more accurate in the environment with maximum cues (cityscape) than in the environment with minimum cues (outer space). More generally, our results suggest that familiar visual cues facilitated better estimation of size and distance than unfamiliar cues. In fact, the presence of unfamiliar, and perhaps misleading, visual cues (characterizing the alien terrain environment) was more disruptive to distance and size perception than the total absence of visual cues. The findings have implications for training workers to better adapt to extreme environments.
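To make the over/underestimation findings concrete, the sketch below shows one standard way such judgments are scored: the signed relative error of each distance estimate, averaged per environment. The trial values and environment labels here are hypothetical placeholders, not data from the study.

```python
# Hypothetical scoring sketch for distance judgments (not the study's code).
# Signed relative error: negative = underestimation, positive = overestimation.

trials = [
    # (environment, true_distance_m, estimated_distance_m) -- made-up values
    ("cityscape",   10.0,  8.5),
    ("alien_terrain", 10.0, 11.2),
    ("outer_space", 10.0,  9.1),
]

errors = {}
for env, true_d, est_d in trials:
    errors.setdefault(env, []).append((est_d - true_d) / true_d)

for env, errs in errors.items():
    mean_err = sum(errs) / len(errs)
    print(f"{env}: mean signed relative error = {mean_err:+.2%}")
```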
Award ID(s):
1928695
NSF-PAR ID:
10356221
Journal Name:
Applied Sciences
Volume:
11
Issue:
23
Page Range or eLocation-ID:
11510
ISSN:
2076-3417
Sponsoring Org:
National Science Foundation
More Like this
1. Motor learning in visuomotor adaptation tasks results from both explicit and implicit processes, each responding differently to an error signal. Although the motor output side of these processes has been extensively studied, the visual input side is relatively unknown. We investigated whether and how depth perception affects the computation of error information by explicit and implicit motor learning. Two groups of participants made reaching movements to bring a virtual cursor to a target in the frontoparallel plane. The Delayed group was allowed to reaim and their feedback was delayed to emphasize explicit learning, whereas the Clamped group received task-irrelevant clamped cursor feedback and continued to aim straight at the target to emphasize implicit adaptation. Both groups played this game in a highly detailed virtual environment (depth condition), leveraging a cover task of playing darts in a virtual tavern, and in an empty environment (no-depth condition). The Delayed group showed an increase in error sensitivity under the depth condition relative to the no-depth condition. In contrast, the Clamped group adapted to the same degree under both conditions. The movement kinematics of the Delayed participants also changed under the depth condition, consistent with the target appearing more distant, unlike the Clamped group. A comparison of the Delayed behavioral data with a perceptual task from the same individuals showed that the greater reaiming in the depth condition was consistent with an increase in the scaling of the error distance and size. These findings suggest that explicit and implicit learning processes may rely on different sources of perceptual information. NEW & NOTEWORTHY We leveraged a classic sensorimotor adaptation task to perform a first systematic assessment of the role of perceptual cues in the estimation of an error signal in 3-D space during motor learning. We crossed two conditions presenting different amounts of depth information with two manipulations emphasizing explicit and implicit learning processes. Explicit learning responded to the visual conditions, consistent with perceptual reports, whereas implicit learning appeared to be independent of them.
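A common way to formalize "error sensitivity" in such tasks is the textbook single-rate state-space model of trial-by-trial adaptation. The sketch below is that standard model, not the authors' analysis; the depth_scale parameter is a hypothetical stand-in for the idea that depth cues rescale the perceived error.

```python
# Textbook single-rate state-space model of adaptation (a sketch, not the
# paper's model): x[t+1] = A*x[t] + B*e[t], where x is the motor correction,
# e is the perceived error, A is retention, and B is error sensitivity.

def simulate(n_trials=80, target_error=15.0, A=0.99, B=0.15, depth_scale=1.0):
    """depth_scale is a hypothetical factor: errors may appear larger
    when depth cues make the target seem more distant."""
    x = 0.0
    history = []
    for _ in range(n_trials):
        e = (target_error - x) * depth_scale  # perceived residual error
        x = A * x + B * e                     # update motor correction
        history.append(x)
    return history

no_depth = simulate(depth_scale=1.0)
depth = simulate(depth_scale=1.3)  # hypothetical inflation of perceived error
print(f"final correction, no-depth: {no_depth[-1]:.1f} deg")
print(f"final correction, depth:    {depth[-1]:.1f} deg")
```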
2. We consider the problem of designing sublinear time algorithms for estimating the cost of a minimum metric traveling salesman (TSP) tour. Specifically, given access to an n × n distance matrix D that specifies pairwise distances between n points, the goal is to estimate the TSP cost by performing only sublinear (in the size of D) queries. For the closely related problem of estimating the weight of a metric minimum spanning tree (MST), it is known that for any epsilon > 0, there exists an Õ(n/epsilon^O(1))-time algorithm that returns a (1 + epsilon)-approximate estimate of the MST cost. This result immediately implies an Õ(n/epsilon^O(1))-time algorithm to estimate the TSP cost to within a (2 + epsilon) factor for any epsilon > 0. However, no o(n^2)-time algorithms are known that approximate metric TSP to a factor strictly better than 2. On the other hand, there were also no known barriers ruling out the existence of (1 + epsilon)-approximate estimation algorithms for metric TSP with Õ(n) time for any fixed epsilon > 0. In this paper, we make progress on both algorithms and lower bounds for estimating metric TSP cost. On the algorithmic side, we first consider the graphic TSP problem, where the metric D corresponds to shortest path distances in a connected unweighted undirected graph. We show that there exists an Õ(n)-time algorithm that estimates the cost of graphic TSP to within a factor of (2 − epsilon_0) for some epsilon_0 > 0. This is the first sublinear cost estimation algorithm for graphic TSP that achieves an approximation factor less than 2. We also consider another well-studied special case of metric TSP, namely (1, 2)-TSP, where all distances are either 1 or 2, and give an Õ(n^1.5)-time algorithm to estimate the optimal cost to within a factor of 1.625. Our estimation algorithms for graphic TSP as well as for (1, 2)-TSP naturally lend themselves to Õ(n)-space streaming algorithms that give an 11/6-approximation for graphic TSP and a 1.625-approximation for (1, 2)-TSP. These results motivate the natural question of whether, analogously to metric MST, (1 + epsilon)-approximate estimates can be obtained for graphic TSP and (1, 2)-TSP using Õ(n) queries for any epsilon > 0. We answer this question in the negative: there exists an epsilon_0 > 0 such that any algorithm that estimates the cost of graphic TSP ((1, 2)-TSP) to within a (1 + epsilon_0) factor necessarily requires Ω(n^2) queries. This lower bound highlights a sharp separation between the metric MST and metric TSP problems. Similarly to many classical approximation algorithms for TSP, our sublinear time estimation algorithms utilize subroutines for estimating the size of a maximum matching in the underlying graph. We show that this is not merely an artifact of our approach: for any epsilon > 0, any algorithm that estimates the cost of graphic TSP or (1, 2)-TSP to within a (1 + epsilon) factor can also be used to estimate the size of a maximum matching in a bipartite graph to within an epsilon·n additive error. This connection allows us to translate known lower bounds for matching size estimation in various models to similar lower bounds for metric TSP cost estimation.
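The (2 + epsilon) baseline mentioned above comes from the folklore bound MST ≤ TSP ≤ 2·MST, so any (1 + epsilon)-approximate MST estimate doubles into a (2 + O(epsilon))-approximate TSP estimate. The sketch below illustrates only that reduction on a toy metric, using an exact O(n^2) Prim's algorithm rather than the paper's sublinear estimators.

```python
# Illustration of the MST <= TSP <= 2*MST reduction (not the paper's
# sublinear algorithm): an MST cost estimate doubles into a factor-2
# estimate of the metric TSP cost.

def prim_mst_weight(D):
    """Exact MST weight of the complete graph given by distance matrix D (O(n^2))."""
    n = len(D)
    in_tree = [False] * n
    best = [float("inf")] * n
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and D[u][v] < best[v]:
                best[v] = D[u][v]
    return total

# Toy 4-point metric (symmetric, satisfies the triangle inequality).
D = [
    [0, 1, 2, 2],
    [1, 0, 1, 2],
    [2, 1, 0, 1],
    [2, 2, 1, 0],
]
mst = prim_mst_weight(D)
print(f"MST weight = {mst}, so the TSP cost lies in [{mst}, {2 * mst}]")
```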
3. Spatial language is often used metaphorically to describe other domains, including time (long sound) and pitch (high sound). How does experience with these metaphors shape the ability to associate space with other domains? Here, we tested 3- to 6-year-old English-speaking children and adults with a cross-domain matching task. We probed cross-domain relations that are expressed in English metaphors for time and pitch (length-time and height-pitch), as well as relations that are unconventional in English but expressed in other languages (size-time and thickness-pitch). Participants were tested with a perceptual matching task, in which they matched between spatial stimuli and sounds of different durations or pitches, and a linguistic matching task, in which they matched between a label denoting a spatial attribute, duration, or pitch, and a picture or sound representing another dimension. Contrary to previous claims that experience with linguistic metaphors is necessary for children to make cross-domain mappings, children performed above chance for both familiar and unfamiliar relations in both tasks, as did adults. Children’s performance was also better when a label was provided for one of the dimensions, but only when making length-time, size-time, and height-pitch mappings (not thickness-pitch mappings). These findings suggest that, although experience with metaphorical language is not necessary to make cross-domain mappings, labels can promote these mappings, both when they have familiar metaphorical uses (e.g., English ‘long’ denotes both length and duration), and when they describe dimensions that share a common ordinal reference frame (e.g., size and duration, but not thickness and pitch).
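For a two-alternative matching task like this, "above chance" is typically assessed against 50% with a binomial test. The sketch below shows that standard check on made-up counts; it is not the study's data or analysis code.

```python
# Standard above-chance check for a two-alternative matching task
# (a sketch on hypothetical counts, not the study's data).
from scipy.stats import binomtest

correct, total = 41, 60  # hypothetical: correct matches out of total trials
result = binomtest(correct, total, p=0.5, alternative="greater")
print(f"{correct}/{total} correct, p = {result.pvalue:.4f} vs. chance (50%)")
```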
4. Background Prey can alter their behavior when detecting predator cues. Little is known about which sensory channel, number of channels, or interaction among channels shrimp species use to evaluate the threat from predators. The amphidromous shrimp Xiphocaris elongata has an induced defense, an elongated rostrum, in habitats where predatory fishes are present. We sought to test whether kairomones or visual cues, presented singly from fish eating either flakes or shrimp, had a greater effect on altering the temporal feeding and refuge use patterns of long-rostrum (LR) X. elongata. We were also interested in elucidating potential interactions among cues presented simultaneously in different combinations (kairomones + visual + mechanosensory, kairomones + alarm + visual, kairomones + alarm, kairomones + visual) on the same response variables. We expected that, when presented alone, kairomones would significantly increase refuge use and decrease foraging, particularly late at night, in comparison to visual cues alone, and that multiple cues presented simultaneously would further increase refuge use and decrease foraging at night. Methods We exposed shrimp to individual or multiple cues from the predatory fish mountain mullet, Augonostomus monticola. We examined shrimp behavior with respect to refuge use and foraging activity during four time periods (after sunset, nighttime, sunrise, and sunset) in a 24-hour period. Results Shrimp presented with fish visual and chemical cues singly did not differ from one another but differed from control shrimp (no cues) with respect to refuge use or foraging. The number of shrimp using refuge in the treatment with the most cues (KVM: kairomones + visual + mechanosensory) was higher than in all treatments with fewer cues. A significant decline in foraging was observed when multiple cues were presented simultaneously. The highest number of shrimp foraged one hour after sunset and at nighttime. A significant interaction was observed between cue treatments and time periods, with shrimp in the KVM treatment foraging less and using more refuge late at night and at sunrise than shrimp in other treatments or time periods. Conclusions The observation that fish chemical and visual cues presented singly produced similar refuge use and foraging patterns was contrary to expectation and suggests that visual and chemical cues, when presented alone, provide redundant information to X. elongata with regard to predation threat. The significant increase in refuge use and reduction in foraging observed in the KVM treatment suggest multimodal signal enhancement in the perception of threat. This makes evolutionary sense in “noisy” environments, such as streams, where detection, localization, and assessment of the intentions of predators are much improved when cues are received through multiple sensory channels.
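The cue × time-period interaction reported here is the kind of effect conventionally tested with a two-way model. The sketch below shows one common way to run such a test in Python (statsmodels OLS/ANOVA) on hypothetical counts; the authors' actual statistical model may differ.

```python
# Conventional two-way ANOVA for a cue-treatment x time-period design
# (hypothetical data, not the paper's analysis).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Made-up counts of shrimp foraging under two crossed factors.
df = pd.DataFrame({
    "cue":      ["control", "control", "KVM", "KVM"] * 3,
    "period":   ["sunset", "night"] * 6,
    "foraging": [8, 9, 7, 3, 9, 8, 6, 2, 7, 9, 6, 3],
})

model = ols("foraging ~ C(cue) * C(period)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the C(cue):C(period) row is the interaction
```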
5. Perceiving and manipulating 3D articulated objects (e.g., cabinets, doors) in human environments is an important yet challenging task for future home-assistant robots. The space of 3D articulated objects is exceptionally rich in its myriad semantic categories, diverse shape geometry, and complicated part functionality. Previous works mostly abstract kinematic structure with estimated joint parameters and part poses as the visual representations for manipulating 3D articulated objects. In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point in which the perception system outputs more actionable guidance than kinematic structure estimation, by predicting dense geometry-aware, interaction-aware, and task-aware visual action affordance and trajectory proposals. We design an interaction-for-perception framework, VAT-Mart, to learn such actionable visual representations by simultaneously training a curiosity-driven reinforcement learning policy that explores diverse interaction trajectories and a perception module that summarizes and generalizes the explored knowledge for pointwise predictions among diverse shapes. Experiments on the large-scale PartNet-Mobility dataset in the SAPIEN environment demonstrate the effectiveness of the proposed approach and show promising generalization capabilities to novel test shapes, unseen object categories, and real-world data.
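To make the "pointwise actionable prior" idea concrete, the sketch below is a minimal per-point affordance head in PyTorch: a small MLP that scores every point of a point cloud for how actionable it is. This is a schematic stand-in, not VAT-Mart's actual architecture, which couples a point-cloud encoder with a curiosity-driven RL policy and trajectory proposals.

```python
# Minimal sketch of a pointwise affordance head (not VAT-Mart's real network):
# assign each point of a point cloud an actionability score in [0, 1].
import torch
import torch.nn as nn

class PointAffordanceHead(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Per-point MLP; a real system would feed features from a
        # point-cloud encoder (e.g., PointNet++) rather than raw xyz.
        self.mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1),
        )

    def forward(self, points):  # points: (B, N, 3)
        return torch.sigmoid(self.mlp(points)).squeeze(-1)  # (B, N) scores

cloud = torch.randn(2, 1024, 3)       # two random point clouds
scores = PointAffordanceHead()(cloud)
print(scores.shape)                    # torch.Size([2, 1024])
```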