Title: Prospective control of steering through multiple waypoints
Some locomotor tasks involve steering at high speeds through multiple waypoints within cluttered environments. Although in principle actors could treat each individual waypoint in isolation, skillful performance would seem to require them to adapt their trajectory to the most immediate waypoint in anticipation of subsequent waypoints. To date, there have been few studies of such behavior, and the evidence that does exist is inconclusive about whether steering is affected by multiple future waypoints. The present study was designed to address the need for a clearer understanding of how humans adapt their steering movements in anticipation of future goals. Subjects performed a simulated drone flying task in a forest-like virtual environment that was presented on a monitor while their eye movements were tracked. They were instructed to steer through a series of gates while the distance at which gates first became visible (i.e., lookahead distance) was manipulated between trials. When gates became visible at least 1-1/2 segments in advance, subjects successfully flew through a high percentage of gates, rarely collided with obstacles, and maintained a consistent speed. They also approached the most immediate gate in a way that depended on the angular position of the subsequent gate. However, when the lookahead distance was less than 1-1/2 segments, subjects followed longer paths and flew at slower, more variable speeds. The findings demonstrate that the control of steering through multiple waypoints does indeed depend on information from beyond the most immediate waypoint. Discussion focuses on the possible control strategies for steering through multiple waypoints.
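One family of control strategies consistent with the finding that the approach to the immediate gate depends on the angular position of the subsequent gate is a turn-rate law that nulls the bearing to the most immediate gate while giving partial weight to the bearing of the next one. The sketch below is a hypothetical illustration of such a two-waypoint law, not a model fitted in the study; the function name and the gains k1 and k2 are assumptions.

```python
import numpy as np

def steering_rate(pos, heading, gate1, gate2, k1=2.0, k2=0.5):
    """Turn-rate command that nulls the bearing error to the most immediate
    gate while partially anticipating the next one. Gains are illustrative."""
    def bearing_error(target):
        to_t = target - pos
        angle = np.arctan2(to_t[1], to_t[0]) - heading
        return (angle + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return k1 * bearing_error(gate1) + k2 * bearing_error(gate2)
```

With this law, a pilot heading straight at the immediate gate still turns slightly toward the side on which the subsequent gate lies, producing the kind of anticipatory approach reported above.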
Award ID(s):
2218220
PAR ID:
10616782
Author(s) / Creator(s):
Publisher / Repository:
Association for Research on Vision and Ophthalmology
Date Published:
Journal Name:
Journal of Vision
Volume:
24
Issue:
8
ISSN:
1534-7362
Page Range / eLocation ID:
1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Zhang, Lei (Ed.)
    When humans navigate through complex environments, they coordinate gaze and steering to sample the visual information needed to guide movement. Gaze and steering behavior have been extensively studied in the context of automobile driving along a winding road, leading to accounts of movement along well-defined paths over flat, obstacle-free surfaces. However, humans are also capable of visually guiding self-motion in environments that are cluttered with obstacles and lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a custom-designed forest-like virtual environment. The environment was viewed through a head-mounted display equipped with an eye tracker to record gaze behavior. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. Subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. In conditions that contained hoops, subjects adapted how they approached the most immediate hoop in anticipation of the position of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume that steering is tightly coupled to where actors look. We consider the study’s broader implications as well as limitations, including the focus on a small sample of highly skilled subjects and inherent noise in measurement of gaze direction. 
  2. Localization of radio-tagged wildlife is essential in environmental research and conservation. Recent advancements in Uncrewed Aerial Vehicles (UAVs) have expanded the potential for improving this process. However, a key challenge lies in the optimal choice of waypoints for UAVs to localize animals with high precision. This study addresses the intelligent selection of waypoints for UAVs assigned to localize multiple stationary Very High Frequency (VHF)-tagged wildlife simultaneously, with a primary emphasis on minimizing localization uncertainty in the shortest possible time. At each designated waypoint, the UAV obtains bearing measurements to tagged animals, considering the associated uncertainty. The algorithm then intelligently recommends subsequent locations that minimize predicted localization uncertainty while accounting for constraints related to mission time, keeping the UAV within signal range, and maintaining a suitable distance from targets to avoid disturbing the wildlife. The evaluation of the algorithm’s performance includes comprehensive assessments, featuring the analysis of uncertainty reduction throughout the mission, comparison of estimated animal locations with ground truth data, and analysis of mission time using Monte Carlo simulations. 
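The waypoint-selection idea can be illustrated with a minimal 2D sketch: predict how a single bearing measurement taken from each candidate waypoint would shrink an estimate of the animal's position, and pick the candidate with the smallest predicted uncertainty, subject to a stand-off distance. The EKF-style update, the trace criterion, and all function names below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bearing_jacobian(uav_xy, animal_xy):
    """Jacobian of the bearing measurement atan2(dy, dx) w.r.t. animal position."""
    dx, dy = animal_xy - uav_xy
    r2 = dx**2 + dy**2
    return np.array([[-dy / r2, dx / r2]])

def predicted_trace(P, uav_xy, animal_xy, sigma_bearing):
    """Trace of the EKF posterior covariance after one bearing measurement."""
    H = bearing_jacobian(uav_xy, animal_xy)
    S = H @ P @ H.T + sigma_bearing**2
    K = P @ H.T / S
    return np.trace((np.eye(2) - K @ H) @ P)

def best_waypoint(P, animal_xy, candidates, sigma_bearing, min_dist):
    """Pick the candidate waypoint that most reduces predicted uncertainty,
    skipping candidates that would come too close to the animal."""
    feasible = [c for c in candidates
                if np.linalg.norm(c - animal_xy) >= min_dist]
    return min(feasible,
               key=lambda c: predicted_trace(P, c, animal_xy, sigma_bearing))
```

A bearing constrains the target only perpendicular to the line of sight, so when the covariance is elongated along one axis the sketch favors a waypoint whose line of sight runs along that axis.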
  3. This paper addresses the problem of generating a position trajectory with pointing direction constraints at given waypoints for underactuated unmanned vehicles. The problem is initially posed on the configuration space ℝ³ × ℝ² and thereafter, upon suitable modifications, is re-posed as a problem on the Lie group SE(3). This is done by determining a vector orthogonal to the pointing direction and using it as the vehicle's thrust direction. This translates to converting reduced attitude constraints to full attitude constraints at the waypoints. For the position trajectory, in addition to position constraints, this modification adds acceleration constraints at the waypoints. For real-time implementation with low computational expenses, a linear-quadratic regulator (LQR) approach is adopted to determine the position trajectory with smoothness up to the fourth time derivative of position (snap). For the attitude trajectory, the thrust direction extracted from the position trajectory is used to first propagate the attitude to the subsequent waypoint and then correct it over time to achieve the desired attitude at this waypoint. Finally, numerical simulation results are presented to validate the trajectory generation scheme.
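The step of converting a reduced attitude constraint (a pointing direction) into a full attitude can be sketched by choosing a thrust axis orthogonal to the pointing direction and completing a right-handed frame. The axis conventions and the use of the world vertical as a tie-breaker below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def attitude_from_pointing(pointing, up=np.array([0.0, 0.0, 1.0])):
    """Build a full attitude R in SO(3) from a pointing-direction constraint:
    project the world 'up' axis out of the pointing direction to obtain a
    thrust axis orthogonal to it, then complete the right-handed triad.
    Fails if the pointing direction is parallel to 'up'."""
    b1 = pointing / np.linalg.norm(pointing)   # body x: pointing direction
    b3 = up - (up @ b1) * b1                   # thrust axis, orthogonal to b1
    b3 /= np.linalg.norm(b3)
    b2 = np.cross(b3, b1)                      # complete the frame
    return np.column_stack([b1, b2, b3])       # columns are body axes
```

The degenerate case (pointing straight up or down) would need a separate convention; the paper's acceleration constraints at waypoints play an analogous disambiguating role for the thrust direction.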
  4. A framework for autonomous waypoint planning, trajectory generation through waypoints, and trajectory tracking for multi-rotor unmanned aerial vehicles (UAVs) is proposed in this work. Safe and effective operation of these UAVs demands obstacle avoidance strategies and advanced trajectory planning and control schemes for stability and energy efficiency. To address this problem, a two-level optimization strategy is used for trajectory generation, and the trajectory is then tracked in a stable manner. The framework given here consists of the following components: (a) a deep reinforcement learning (DRL)-based algorithm for optimal waypoint planning while minimizing control energy and avoiding obstacles in a given environment; (b) an optimal, smooth trajectory generation algorithm through waypoints that minimizes a combination of velocity, acceleration, jerk, and snap; and (c) a stable tracking control law that determines a control thrust force for a UAV to track the generated trajectory.
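Component (b) can be illustrated in its simplest special case: a seventh-order polynomial segment between two waypoints with zero velocity, acceleration, and jerk at both ends, which is the closed-form rest-to-rest minimum-snap segment. This is a simplified stand-in for the paper's weighted velocity/acceleration/jerk/snap objective, and the function name is an assumption.

```python
import numpy as np

def snap_segment(p0, p1, T):
    """Coefficients c0..c7 of a degree-7 polynomial from p0 to p1 over [0, T]
    with zero velocity, acceleration, and jerk at both endpoints -- the
    rest-to-rest minimum-snap segment between two waypoints."""
    def rows(t):
        # position, velocity, acceleration, jerk rows of the constraint matrix
        return np.array([
            [t**k for k in range(8)],
            [k * t**max(k - 1, 0) for k in range(8)],
            [k * (k - 1) * t**max(k - 2, 0) for k in range(8)],
            [k * (k - 1) * (k - 2) * t**max(k - 3, 0) for k in range(8)],
        ], dtype=float)
    A = np.vstack([rows(0.0), rows(float(T))])
    b = np.array([p0, 0, 0, 0, p1, 0, 0, 0], dtype=float)
    return np.linalg.solve(A, b)
```

Minimizing snap alone makes the eighth derivative vanish, so the optimum is a degree-7 polynomial pinned down by the eight boundary conditions; the paper's multi-term cost would instead be solved as a small quadratic program.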
  5. Robot arms should be able to learn new tasks. One framework here is reinforcement learning, where the robot is given a reward function that encodes the task, and the robot autonomously learns actions to maximize its reward. Existing approaches to reinforcement learning often frame this problem as a Markov decision process, and learn a policy (or a hierarchy of policies) to complete the task. These policies reason over hundreds of fine-grained actions that the robot arm needs to take: e.g., moving slightly to the right or rotating the end-effector a few degrees. But the manipulation tasks that we want robots to perform can often be broken down into a small number of high-level motions: e.g., reaching an object or turning a handle. In this paper we therefore propose a waypoint-based approach for model-free reinforcement learning. Instead of learning a low-level policy, the robot now learns a trajectory of waypoints, and then interpolates between those waypoints using existing controllers. Our key novelty is framing this waypoint-based setting as a sequence of multi-armed bandits: each bandit problem corresponds to one waypoint along the robot’s motion. We theoretically show that an ideal solution to this reformulation has lower regret bounds than standard frameworks. We also introduce an approximate posterior sampling solution that builds the robot’s motion one waypoint at a time. Results across benchmark simulations and two real-world experiments suggest that this proposed approach learns new tasks more quickly than state-of-the-art baselines. See our website here: https://collab.me.vt.edu/rl-waypoints/ 
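The per-waypoint bandit idea can be sketched with one Gaussian multi-armed bandit whose arms are candidate waypoints and whose rewards come from executing the interpolated motion. The posterior-sampling (Thompson) selection and conjugate Gaussian update below are a generic sketch under assumed prior and noise variances, not the paper's exact algorithm.

```python
import numpy as np

class WaypointBandit:
    """One bandit per waypoint: arms are candidate waypoint positions,
    rewards are task returns from executing the resulting motion."""
    def __init__(self, n_arms, prior_var=1.0, noise_var=0.25):
        self.mean = np.zeros(n_arms)            # posterior means per arm
        self.var = np.full(n_arms, prior_var)   # posterior variances per arm
        self.noise_var = noise_var

    def select(self, rng):
        # Thompson sampling: draw from each arm's posterior, pick the best
        samples = rng.normal(self.mean, np.sqrt(self.var))
        return int(np.argmax(samples))

    def update(self, arm, reward):
        # conjugate Gaussian update of the chosen arm's posterior
        precision = 1.0 / self.var[arm] + 1.0 / self.noise_var
        self.mean[arm] = (self.mean[arm] / self.var[arm]
                          + reward / self.noise_var) / precision
        self.var[arm] = 1.0 / precision
```

Stacking one such bandit per waypoint, solved sequentially, mirrors the paper's framing of building the motion one waypoint at a time rather than learning a fine-grained low-level policy.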