Directing groups of unmanned air vehicles (UAVs) is a task that typically requires the full attention of several operators. This can be prohibitive in situations where an operator must also pay attention to their surroundings. In this paper we present a gesture device that assists operators in commanding UAVs in focus-constrained environments. The operator influences the UAVs' behavior with intuitive hand gestures. Gestures are captured using an accelerometer and gyroscope and then classified using a logistic regression model. Ten gestures were chosen to provide behaviors for a group of fixed-wing UAVs. These behaviors specified various searching, following, and tracking patterns that could be used in a dynamic environment. A novel variant of the Monte Carlo Tree Search algorithm was developed to autonomously plan the paths of the cooperating UAVs. These autonomy algorithms were executed when their corresponding gesture was recognized by the gesture device. The gesture device was trained to classify the ten gestures and accurately identified them 95% of the time. Each of the behaviors associated with the gestures was tested in hardware-in-the-loop simulations, and the ability to dynamically switch between them was demonstrated. The results show that the system can be used as a natural interface to assist an operator in directing a fleet of UAVs.
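As a hedged illustration of the classification stage described above, the following minimal scikit-learn sketch classifies windowed 6-axis IMU samples (3-axis accelerometer plus 3-axis gyroscope) into ten gesture classes. The feature pipeline here is an assumption for illustration; the paper's exact features and training setup are not given in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def imu_features(window: np.ndarray) -> np.ndarray:
    """Reduce a (samples x 6) accelerometer+gyroscope window to summary stats."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def train_gesture_classifier(windows, labels):
    # windows: list of (samples x 6) IMU arrays; labels: gesture ids 0..9.
    # Features and pipeline are assumptions, not the paper's published setup.
    X = np.stack([imu_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X, labels)
    return clf  # clf.predict(imu_features(new_window)[None, :]) -> gesture id
```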
A gesture device was created that enables operators to command a group of UAVs in focus-constrained environments. Each gesture triggers high-level commands that direct a UAV group to execute complex behaviors. Software simulations and hardware-in-the-loop testing show the device is effective in directing UAV groups.
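For readers unfamiliar with the planner the abstract builds on, below is a generic Monte Carlo Tree Search (UCT) skeleton for receding-horizon planning. It is not the paper's novel variant, whose modifications are not described here, and the `State` interface (`actions`, `step`, `reward`) is hypothetical.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action
        self.children = []
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        # Standard UCB1: exploit average value, explore rarely-visited nodes.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts_plan(root_state, iterations=1000, horizon=10):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend by UCB1 while the node is fully expanded.
        while node.children and len(node.children) == len(node.state.actions()):
            node = max(node.children, key=Node.ucb1)
        # Expansion: add one untried action, if any remain.
        tried = {child.action for child in node.children}
        untried = [a for a in node.state.actions() if a not in tried]
        if untried:
            action = random.choice(untried)
            child = Node(node.state.step(action), parent=node, action=action)
            node.children.append(child)
            node = child
        # Rollout: random actions to the planning horizon, summing reward.
        state, reward = node.state, 0.0
        for _ in range(horizon):
            state = state.step(random.choice(state.actions()))
            reward += state.reward()
        # Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Commit to the most-visited first action.
    return max(root.children, key=lambda c: c.visits).action
```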
- NSF-PAR ID: 10225747
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: SN Applied Sciences
- Volume: 3
- Issue: 6
- ISSN: 2523-3963
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Unmanned aerial vehicles (UAVs) are becoming more common, presenting the need for effective human-robot communication strategies that address the unique nature of unmanned aerial flight. Visual communication via drone flight paths, also called gestures, may prove to be an ideal method. However, the effectiveness of visual communication techniques depends on several factors, including an observer's position relative to a UAV. Previous work has studied the maximum line-of-sight distance at which observers can identify a small UAV [1]. However, this work did not consider how changes in distance may affect an observer's ability to perceive the shape of a UAV's motion. In this study, we conduct a series of online surveys to evaluate how changes in line-of-sight distance and gesture size affect observers' ability to identify and distinguish between UAV gestures. We first examine observers' ability to accurately identify gestures when adjusting a gesture's size relative to the size of a UAV. We then measure how observers' ability to identify gestures changes with respect to varying line-of-sight distances. Lastly, we consider how altering the size of a UAV gesture may improve an observer's ability to identify drone gestures from varying distances. Our results show that increasing the gesture size across varying UAV-to-gesture ratios did not have a significant effect on participant response accuracy. We found that between 17 m and 75 m from the observer, participants' ability to accurately identify a drone gesture was inversely proportional to the distance between the observer and the drone. Finally, we found that maintaining a gesture's apparent size improves participant response accuracy over changing line-of-sight distances.
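The last finding follows from basic visual-angle geometry: a gesture of physical size s viewed from distance d subtends an angle of 2·atan(s/(2d)), so keeping the apparent size constant means scaling the gesture with distance. A minimal illustrative sketch follows; the gesture size used in the example is an assumed value, not one of the study's stimuli (the 17 m and 75 m distances come from the abstract).

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Angle subtended by a gesture of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def size_for_angle(angle_deg: float, distance_m: float) -> float:
    """Gesture size needed to subtend a given angle at a given distance."""
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

# Assumed example: a 2 m gesture at 17 m subtends about 6.7 degrees;
# preserving that apparent size at 75 m requires a gesture of about 8.8 m.
theta = visual_angle_deg(2.0, 17.0)
print(round(size_for_angle(theta, 75.0), 1))
```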
- Over the past few years, unmanned aerial vehicles (UAVs) have become increasingly popular for purposes such as surveillance, industrial automation, robotics, vehicle guidance, and traffic monitoring and control. Having multiple methods of controlling UAVs is important to suit their varied uses. The goal of this work was to develop a new technique to control a UAV using different hand gestures. To achieve this, a hand keypoint detection algorithm was used to detect 21 keypoints in the hand. These keypoints were then used as the input to an intelligent system based on convolutional neural networks (CNNs) that classified the hand gestures. The UAV's video camera was used to capture the hand gestures. A database containing 2400 hand images was created and used to train the CNN. The database contained 8 different hand gestures selected to send specific motion commands to the UAV. The CNN classified the hand gestures with 93% accuracy. To test the capabilities of the intelligent control system, a small UAV, the DJI Ryze Tello drone, was used. The experimental results demonstrated that the DJI Tello drone could be successfully controlled by hand gestures in real time.
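As a point of reference for how 21 detected hand keypoints might feed a convolutional classifier over 8 gesture classes, here is a minimal PyTorch sketch. The architecture is an assumption for illustration; the paper's actual CNN design, preprocessing, and training details are not reproduced here.

```python
import torch
import torch.nn as nn

class KeypointGestureNet(nn.Module):
    """Illustrative 1-D CNN over 21 (x, y) hand keypoints -> 8 gesture classes."""

    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=3, padding=1),   # channels = (x, y)
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                       # pool over keypoints
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        # keypoints: (batch, 21, 2) -> (batch, 2, 21) for Conv1d
        x = self.conv(keypoints.transpose(1, 2)).squeeze(-1)
        return self.head(x)

logits = KeypointGestureNet()(torch.randn(4, 21, 2))  # -> shape (4, 8)
```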
- The slow emergence of gaze‐ and point‐following: A longitudinal study of infants from 4 to 12 months
  Abstract: Acquisition of visual attention‐following skills, notably gaze‐ and point‐following, contributes to infants' ability to share attention with caregivers, which in turn contributes to social learning and communication. However, the development of gaze‐ and point‐following in the first 18 months remains controversial, in part because of different testing protocols and standards. To address this, we longitudinally tested N = 43 low‐risk, North American middle‐class infants' tendency to follow gaze direction, pointing gestures, and gaze‐and‐point combinations. Infants were tested monthly from 4 to 12 months of age. To control motivational differences, infants were taught to expect contingent reward videos in the target locations. No‐cue trials were included to estimate spontaneous target fixation rates. A comparison sample (N = 23) was tested at 9 and 12 months to estimate practice effects. Results showed gradual increases in both gaze‐ and point‐following starting around 7 months, and modest month‐to‐month individual stability from 8 to 12 months. However, attention‐following did not exceed chance levels until after 6 months. Infants rarely followed cues to locations behind them, even at 12 months. Infants followed combined gaze‐and‐point cues more than gaze alone, and followed points at intermediate levels (not reliably different from the other cues). The comparison group's results showed that practice effects did not explain the age‐related increase in attention‐following. The results corroborate and extend previous findings that North American middle‐class infants' attention‐following in controlled laboratory settings increases slowly and incrementally between 6 and 12 months of age.
  Research Highlights:
  - A longitudinal experimental study documented the emergence and developmental trajectories of North American middle‐class infants' visual attention‐following skills, including gaze‐following, point‐following, and gaze‐and‐point‐following.
  - A new paradigm controlled for factors including motivation, attentiveness, and visual‐search baserates. Motor development was ruled out as a predictor or limiter of the emergence of attention‐following.
  - Infants did not follow attention reliably until after 6 months, and following increased slowly from 7 to 12 months.
  - Infants' individual trajectories showed modest month‐to‐month stability from 8 to 12 months of age.
- Abstract: Members of advantaged groups are more likely than members of disadvantaged groups to think, feel, and behave in ways that reinforce their group's position within the hierarchy. This study examined how children's status within a group‐based hierarchy shapes their beliefs about the hierarchy and the groups that comprise it in ways that reinforce the hierarchy. To do this, we randomly assigned children (4–8 years; N = 123; 75 female, 48 male; 21 Asian, 9 Black, 21 Latino/a, 1 Middle‐Eastern/North‐African, 14 multiracial, 41 White, 16 not‐specified) to novel groups that differed in social status (advantaged, disadvantaged, neutral third‐party) and assessed their beliefs about the hierarchy. Across five separate assessments, advantaged‐group children were more likely to judge the hierarchy to be fair, generalizable, and wrong to challenge and were more likely to hold biased intergroup attitudes and exclude disadvantaged group members. In addition, with age, children in both the advantaged‐ and disadvantaged‐groups became more likely to see membership in their own group as inherited, while at the same time expecting group‐relevant behaviors to be determined more by the environment. With age, children also judged the hierarchy to be more unfair and expected the hierarchy to generalize across contexts. These findings provide novel insights into how children's position within hierarchies can contribute to the formation of hierarchy‐reinforcing beliefs.
  Research Highlights:
  - A total of 123 4–8‐year‐olds were assigned to advantaged, disadvantaged, and third‐party groups within a hierarchy and were assessed on seven hierarchy‐reinforcing beliefs about the hierarchy.
  - Advantaged children were more likely to say the hierarchy was fair, generalizable, and wrong to challenge and to hold intergroup biases favoring advantaged group members.
  - With age, advantaged‐ and disadvantaged‐group children held more essentialist beliefs about membership in their own group, but not the behaviors associated with their group.
  - Results suggest that advantaged group status can shape how children perceive and respond to the hierarchies they are embedded within.
- Unmanned Aerial Vehicle (UAV) flight paths have been shown to communicate meaning to human observers, similar to human gestural communication. This paper presents the results of a UAV gesture perception study designed to assess how observer viewpoint perspective may impact how humans perceive the shape of UAV gestural motion. Robot gesture designers have demonstrated that robots can indeed communicate meaning through gesture; however, many of these results are limited to an idealized range of viewer perspectives and do not consider how the perception of a robot gesture may suffer from obfuscation or self-occlusion from some viewpoints. This paper presents the results of three online user studies that examine participants' ability to accurately perceive the intended shape of two-dimensional UAV gestures from varying viewer perspectives. We used a logistic regression model to characterize participant gesture classification accuracy, demonstrating that viewer perspective does impact how participants perceive the shape of UAV gestures. Our results yielded a viewpoint angle threshold beyond which participants were able to assess the intended shape of a gesture's motion with 90% accuracy. We also introduce a perceptibility score to capture user confidence, time to decision, and accuracy in labeling, and to understand how differences in flight paths impact perception across viewpoints. These findings will enable UAV gesture systems that, with a high degree of confidence, ensure gesture motions can be accurately perceived by human observers.
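The analysis style described, fitting a logistic regression of per-trial correctness against viewpoint angle and reading off a 90%-accuracy threshold, can be sketched as follows. The function and variable names are placeholders, not the study's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def accuracy_threshold(angles_deg: np.ndarray, correct: np.ndarray,
                       target: float = 0.90) -> float:
    """Viewpoint angle at which predicted identification accuracy hits target.

    angles_deg: (n_trials,) viewpoint angle per trial (placeholder data)
    correct:    (n_trials,) 1 if the gesture was identified correctly, else 0
    """
    model = LogisticRegression().fit(angles_deg.reshape(-1, 1), correct)
    b0, b1 = model.intercept_[0], model.coef_[0, 0]
    # Solve sigmoid(b0 + b1 * angle) = target for angle, i.e. invert the logit.
    return (np.log(target / (1 - target)) - b0) / b1
```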