Some locomotor tasks involve steering at high speeds through multiple waypoints within cluttered environments. Although in principle actors could treat each individual waypoint in isolation, skillful performance would seem to require them to adapt their trajectory to the most immediate waypoint in anticipation of subsequent waypoints. To date, there have been few studies of such behavior, and the evidence that does exist is inconclusive about whether steering is affected by multiple future waypoints. The present study was designed to address the need for a clearer understanding of how humans adapt their steering movements in anticipation of future goals. Subjects performed a simulated drone flying task in a forest-like virtual environment that was presented on a monitor while their eye movements were tracked. They were instructed to steer through a series of gates while the distance at which gates first became visible (i.e., lookahead distance) was manipulated between trials. When gates became visible at least 1.5 segments in advance, subjects successfully flew through a high percentage of gates, rarely collided with obstacles, and maintained a consistent speed. They also approached the most immediate gate in a way that depended on the angular position of the subsequent gate. However, when the lookahead distance was less than 1.5 segments, subjects followed longer paths and flew at slower, more variable speeds. The findings demonstrate that the control of steering through multiple waypoints does indeed depend on information from beyond the most immediate waypoint. Discussion focuses on the possible control strategies for steering through multiple waypoints.
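One family of candidate control strategies blends the immediate waypoint with the subsequent one when computing a steering command. A toy sketch of such an anticipatory rule follows; the blending scheme, parameter `w`, and all names are illustrative assumptions, not the model tested in the study:

```python
import math

def steering_command(heading, pos, gate1, gate2, w=0.3):
    """Toy anticipatory steering rule: aim at a point blended between
    the immediate gate and the subsequent gate. `w` weights the next
    gate; w=0 reduces to purely reactive, one-gate steering.
    Returns a signed turn command in radians."""
    # Blend the two gate positions into a single aim point
    aim = ((1 - w) * gate1[0] + w * gate2[0],
           (1 - w) * gate1[1] + w * gate2[1])
    desired = math.atan2(aim[1] - pos[1], aim[0] - pos[0])
    # Wrap the heading error into (-pi, pi]
    err = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    return err

# When the next gate lies off to one side, the command deflects the
# approach to the immediate gate in that direction, qualitatively
# matching the anticipatory adaptation described in the abstract.
straight = steering_command(0.0, (0.0, 0.0), (10.0, 0.0), (10.0, 0.0))
deflected = steering_command(0.0, (0.0, 0.0), (10.0, 0.0), (10.0, 10.0))
```

With both gates dead ahead the command is zero; moving the second gate to the left (positive y) produces a positive (leftward) command even though the immediate gate is unchanged.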
Coordination of gaze and action during high-speed steering and obstacle avoidance
When humans navigate through complex environments, they coordinate gaze and steering to sample the visual information needed to guide movement. Gaze and steering behavior have been extensively studied in the context of automobile driving along a winding road, leading to accounts of movement along well-defined paths over flat, obstacle-free surfaces. However, humans are also capable of visually guiding self-motion in environments that are cluttered with obstacles and lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a custom-designed forest-like virtual environment. The environment was viewed through a head-mounted display equipped with an eye tracker to record gaze behavior. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. Subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. In conditions that contained hoops, subjects adapted how they approached the most immediate hoop in anticipation of the position of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume that steering is tightly coupled to where actors look. We consider the study’s broader implications as well as limitations, including the focus on a small sample of highly skilled subjects and inherent noise in measurement of gaze direction.
- Award ID(s): 2218220
- PAR ID: 10528735
- Editor(s): Zhang, Lei
- Publisher / Repository: Public Library of Science (PLOS)
- Date Published:
- Journal Name: PLOS ONE
- Volume: 19
- Issue: 3
- ISSN: 1932-6203
- Page Range / eLocation ID: e0289855
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Synopsis: Shark skin is a composite of mineralized dermal denticles embedded in an internal collagen fiber network and is sexually dimorphic. Female shark skin is thicker, has greater denticle density and denticle overlap compared to male shark skin, and denticle morphology differs between sexes. The skin behaves with mechanical anisotropy, extending farther when tested along the longitudinal (anteroposterior) axis but increasing in stiffness along the hoop (dorsoventral or circumferential) axis. As a result, shark skin has been hypothesized to function as an exotendon. This study aims to quantify sex differences in the mechanical properties and morphology of shark skin. We tested skin from two immature male and two immature female sharks from three species (bonnethead shark, Sphyrna tiburo; bull shark, Carcharhinus leucas; silky shark, Carcharhinus falciformis) along two orientations (longitudinal and hoop) in uniaxial tension with an Instron E1000 at a 2 mm s−1 strain rate. We found that male shark skin was significantly tougher than female skin, although females had significantly greater skin thickness compared to males. We found skin in the hoop direction was significantly stiffer than the longitudinal direction across sexes and species, while skin in the longitudinal direction was significantly more extensible than in the hoop direction. We found that shark skin mechanical behavior was impacted by sex, species, and direction, and related to morphological features of the skin.
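The quantities compared in such tensile tests are typically derived from the stress-strain curve: stiffness as the slope of the initial linear region and toughness as the area under the curve. A minimal sketch in Python using idealized synthetic data, not the study's measurements (function names and the 10 MPa modulus are illustrative assumptions):

```python
import numpy as np

def stiffness(strain, stress, frac=0.2):
    """Estimate stiffness as the least-squares slope over the
    initial `frac` of the stress-strain data."""
    n = max(2, int(len(strain) * frac))
    slope, _intercept = np.polyfit(strain[:n], stress[:n], 1)
    return float(slope)

def toughness(strain, stress):
    """Energy absorbed per unit volume: trapezoidal area under
    the stress-strain curve."""
    return float(np.sum((stress[1:] + stress[:-1]) / 2.0 * np.diff(strain)))

# Idealized linear-elastic curve: stress = E * strain with E = 10 MPa
strain = np.linspace(0.0, 0.4, 41)
stress = 10.0 * strain

E = stiffness(strain, stress)   # recovers the 10 MPa modulus
U = toughness(strain, stress)   # 0.5 * 10 * 0.4**2 = 0.8 MJ/m^3
```

For a linear curve the trapezoidal rule is exact, so the toughness matches the triangle area 0.5·E·ε².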
Key points
- Vision plays a crucial role in guiding locomotion in complex environments, but the coordination between gaze and stride is not well understood.
- The coordination of gaze shifts, fixations, constant gaze, and slow gaze with strides was examined in cats walking on different surfaces.
- Gaze behaviours are coordinated with strides even when walking on a flat surface in complete darkness, occurring in a sequential order during different phases of the stride.
- During walking on complex surfaces, gaze behaviours are typically more tightly coordinated with strides, particularly at faster speeds, shifting only slightly in phase.
- These findings indicate that the coordination of gaze behaviours with strides is not vision-driven but is part of the whole-body locomotion synergy; the visual environment and locomotor task modulate it. The results may be relevant to developing diagnostic tools and rehabilitation approaches for patients with locomotor deficits.

Abstract
Vision plays a crucial role in guiding locomotion in complex environments. However, the coordination between gaze and stride is not well understood. We investigated this coordination in cats walking on a flat surface in darkness or light, along a horizontal ladder, and on a pathway with small stones. We recorded vertical and horizontal eye movements and 3-D head movement, and calculated where gaze intersected the walkway. The coordination with strides of gaze shifts away from the animal, gaze shifts toward the animal, fixations, constant gaze, and slow gaze was investigated. We found that even during walking on the flat surface in darkness, all gaze behaviours were coordinated with strides. Gaze shifts and slow gaze toward the animal started in the beginning of each forelimb's swing and ended in its second half. Fixations peaked throughout the beginning and middle of swing. Gaze shifts away began throughout the second half of swing of each forelimb and ended when both forelimbs were in stance. Constant gaze and slow gaze away occurred in the beginning of stance. However, not every behaviour occurred during every stride. Light had a small effect. The ladder and stones typically increased the coordination and caused gaze behaviours to occur 3% earlier in the cycle. At faster speeds, the coordination was often tighter and some gaze behaviours occurred 2–16% later in the cycle. The findings indicate that the coordination of gaze with strides is not vision-driven but is part of the whole-body locomotion synergy; the visual environment and locomotor task modulate it.
Abstract
Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm.
Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLM) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell-time signals to classify reorienting events.
Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals differed across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that the hybrid classifier that integrates EEG, pupil, and dwell-time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions.
Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but nevertheless can be captured and integrated to classify target vs. distractor objects to which the human subject orients.
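The hybrid classification step can be illustrated generically: combine per-event features into one matrix, fit a linear classifier, and score with AUC. A minimal sketch using synthetic EEG, pupil, and dwell-time features and plain logistic regression, not the HDCA pipeline or data from the study (all feature shifts and names are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)  # 1 = target (reorienting event), 0 = distractor

# Hypothetical per-event features: targets get a shifted mean on each modality
eeg   = rng.normal(labels * 1.0, 1.0)
pupil = rng.normal(labels * 0.5, 1.0)
dwell = rng.normal(labels * 1.5, 1.0)  # dwell time carries the most signal
X = np.column_stack([eeg, pupil, dwell])

# Fit a linear classifier on half the events, score AUC on the held-out half
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                          random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Because the synthetic dwell-time feature has the largest class separation, it dominates the fitted weights, mirroring the abstract's finding that dwell time contributes most to the reorienting signal.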
Abstract
In this paper, results for SS316L microtube experiments under combined inflation and axial loading for single- and multi-segment loading deformation paths are presented along with a plasticity model to predict the associated stress and strain paths. The microtube inflation/tension machine, utilized for these experiments, creates biaxial stress states by applying axial tension or compression and internal pressure simultaneously. Two types of loading paths are considered in this paper: proportional (where a single loading path with a given axial:hoop stress ratio is followed) and corner (where an initial pure loading segment, i.e., axial or hoop, is followed by a secondary loading segment in the transverse direction, i.e., hoop or axial, respectively). The experiments are designed to produce the same final strain state under different deformation paths, resulting in different final stress states. This difference in stress state can affect the material properties of the final part, which can be varied for the intended application, e.g., biomedical hardware, while maintaining the desired geometry. The experiments are replicated in a reasonable way by a material model that combines the Hill 1948 anisotropic yield function and the Hockett–Sherby hardening law. Discussion of the grain size effects during microforming impacting the ability to achieve consistent deformation path results is included.
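The two constitutive ingredients named above have standard textbook forms, sketched below; the parameter symbols (A, B, m, n, F, G, H) are generic placeholders, not the fitted values from the paper. The Hockett–Sherby law gives the flow stress as a saturating function of equivalent plastic strain, and Hill's 1948 criterion, specialized to a thin-walled tube under axial stress σ_a and hoop stress σ_h with negligible through-thickness and shear stresses, defines the anisotropic yield surface:

```latex
% Hockett--Sherby hardening law
% (A: initial yield stress, B: saturation stress, m, n: fitting parameters)
\bar{\sigma}(\bar{\varepsilon}_p) = B - (B - A)\,\exp\!\left(-m\,\bar{\varepsilon}_p^{\,n}\right)

% Hill 1948 yield criterion, plane stress in the tube wall
% (\sigma_a: axial stress, \sigma_h: hoop stress; F, G, H: anisotropy coefficients)
F\,\sigma_h^2 + G\,\sigma_a^2 + H\,(\sigma_a - \sigma_h)^2 = 1
```

In a proportional test the ratio σ_a:σ_h is held fixed, so the stress point moves radially toward this yield surface; in a corner test it first reaches the surface along one axis and then turns, which is why the same final strain can correspond to different final stresses.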