Semantic gaze labeling for human-robot shared manipulation
Human-robot collaboration systems benefit from recognizing people’s intentions. This capability is especially useful for collaborative manipulation applications, in which users operate robot arms to manipulate objects. For collaborative manipulation, systems can determine users’ intentions by tracking eye gaze and identifying gaze fixations on particular objects in the scene (i.e., semantic gaze labeling). Translating 2D fixation locations (from eye trackers) into 3D fixation locations (in the real world) is a technical challenge. One approach is to assign each fixation to the object closest to it. However, calibration drift, head motion, and the extra dimension required for real-world interactions make this position matching approach inaccurate. In this work, we introduce velocity features that compare the relative motion between subsequent gaze fixations and a finite set of known points and assign fixation position to one of those known points. We validate our approach on synthetic data to demonstrate that classifying using velocity features is more robust than a position matching approach. In addition, we show that a classifier using velocity features improves semantic labeling on a real-world dataset of human-robot assistive manipulation interactions.
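To make the contrast concrete, the sketch below compares naive position matching with a velocity-style assignment on 2D points. It is a minimal illustration under assumed names and a shared coordinate frame, not the paper's implementation; because a constant calibration offset appears in both fixations, it cancels when displacements are compared, which is why a velocity-style feature can tolerate drift that defeats position matching.

```python
# Minimal illustrative sketch (not the paper's code): comparing position
# matching with a velocity-style feature for assigning fixations to known
# points.  Assumes fixations and object locations share one 2D frame; the
# names and the nearest-match rule are simplifying assumptions.
import numpy as np

def position_match(fixation, object_points):
    """Baseline: assign the fixation to the closest known object."""
    return int(np.argmin(np.linalg.norm(object_points - fixation, axis=1)))

def velocity_match(prev_fixation, fixation, prev_object_points, object_points):
    """Assign the fixation by comparing the gaze displacement between two
    subsequent fixations with each object's displacement over the same
    interval; a constant calibration offset cancels out of the comparison."""
    gaze_motion = fixation - prev_fixation
    object_motion = object_points - prev_object_points
    return int(np.argmin(np.linalg.norm(object_motion - gaze_motion, axis=1)))

if __name__ == "__main__":
    objects_t0 = np.array([[0.0, 0.0], [2.0, 0.0]])
    objects_t1 = np.array([[0.0, 0.0], [2.0, 1.0]])  # second object moves upward
    drift = np.array([-1.5, 0.0])                    # constant tracker offset
    fix_t0 = objects_t0[1] + drift                   # user is watching object 1
    fix_t1 = objects_t1[1] + drift
    print(position_match(fix_t1, objects_t1))        # 0 -- drift fools the baseline
    print(velocity_match(fix_t0, fix_t1, objects_t0, objects_t1))  # 1 -- correct
```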
- Award ID(s):
- 1755823
- PAR ID:
- 10142359
- Date Published:
- Journal Name:
- Symposium on Eye Tracking Research and Applications (ETRA)
- Page Range / eLocation ID:
- 1 to 9
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Objective: The goal of this study was to assess how different real-time gaze sharing visualization techniques affect eye tracking metrics, workload, team situation awareness (TSA), and team performance. Background: Gaze sharing is a real-time visualization technique that allows teams to know where their team members are looking on a shared display. Gaze sharing visualization techniques are a promising means to improve collaborative performance on simple tasks; however, there needs to be validation of gaze sharing with more complex and dynamic tasks. Methods: This study evaluated the effect of gaze sharing on eye tracking metrics, workload, team SA, and team performance in a simulated unmanned aerial vehicle (UAV) command-and-control task. Thirty-five teams of two performed UAV tasks under three conditions: one with no gaze sharing and two with gaze sharing. Gaze sharing was presented using a fixation dot (i.e., a translucent colored dot) and a fixation trail (i.e., a trail of the most recent fixations). Results: The results showed that the fixation trail significantly reduced saccadic activity, lowered workload, supported team SA at all levels, and improved performance compared to no gaze sharing; however, the fixation dot had the opposite effect on performance and SA. In fact, having no gaze sharing outperformed the fixation dot. Participants also preferred the fixation trail for its visibility and ability to track and monitor the history of their partner's gaze. Conclusion: The results showed that gaze sharing has the potential to support collaboration, but its effectiveness depends highly on the design and context of use. Application: The findings suggest that gaze sharing visualization techniques, like the fixation trail, have the potential to improve teamwork in complex UAV tasks and could have broader applicability in a variety of collaborative settings.
-
Humans actively observe the visual surroundings by focusing on salient objects and ignoring trivial details. However, computer vision models based on convolutional neural networks (CNNs) often analyze visual input all at once through a single feedforward pass. In this study, we designed a dual-stream vision model inspired by the human brain. This model features retina-like input layers and includes two streams: one determines the next point of focus (the fixation), while the other interprets the visual content surrounding the fixation. Trained on image recognition, this model examines an image through a sequence of fixations, each time focusing on different parts, thereby progressively building a representation of the image. We evaluated this model against various benchmarks in terms of object recognition, gaze behavior, and adversarial robustness. Our findings suggest that the model can attend and gaze in ways similar to humans without being explicitly trained to mimic human attention, and that the model can enhance robustness against adversarial attacks due to its retinal sampling and recurrent processing. In particular, the model can correct its perceptual errors by taking more glances, setting itself apart from all feedforward-only models. In conclusion, the interactions of retinal sampling, eye movement, and recurrent dynamics are important to human-like visual exploration and inference.
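As a rough illustration of the glance-loop idea in that abstract, the toy sketch below accumulates evidence over a sequence of fixations while a simple salience rule picks where to look next; every function, statistic, and "stream" here is an assumption for illustration, not the authors' trained network.

```python
# Schematic glance loop (toy stand-ins throughout): one pathway proposes the
# next fixation, the other accumulates evidence about what the image contains.
import numpy as np

def foveal_patch(image, fixation, size=8):
    """Crop a small patch around the fixation -- a crude stand-in for
    retina-like sampling that is sharp at the fovea and coarse elsewhere."""
    r = int(np.clip(fixation[0] - size // 2, 0, image.shape[0] - size))
    c = int(np.clip(fixation[1] - size // 2, 0, image.shape[1] - size))
    return image[r:r + size, c:c + size]

def recognize(image, n_glances=5):
    h, w = image.shape
    evidence = np.zeros(2)                 # toy two-class evidence accumulator
    visited = np.zeros_like(image, dtype=bool)
    fixation = (h // 2, w // 2)            # start from the image centre
    for _ in range(n_glances):
        patch = foveal_patch(image, fixation)
        evidence += np.array([patch.mean(), 1.0 - patch.mean()])  # "what" stream
        visited[fixation] = True
        salience = np.where(visited, -np.inf, image)              # "where" stream
        fixation = np.unravel_index(np.argmax(salience), image.shape)
    return int(np.argmax(evidence))

print(recognize(np.random.default_rng(0).random((32, 32))))
```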
-
We introduce WebGazer, an online eye tracker that uses common webcams already present in laptops and mobile devices to infer the eye-gaze locations of web visitors on a page in real time. The eye tracking model self-calibrates by watching web visitors interact with the web page and trains a mapping between features of the eye and positions on the screen. This approach aims to provide a natural experience to everyday users that is not restricted to laboratories and highly controlled user studies. WebGazer has two key components: a pupil detector that can be combined with any eye detection library, and a gaze estimator using regression analysis informed by user interactions. We perform a large remote online study and a small in-person study to evaluate WebGazer. The findings show that WebGazer can learn from user interactions and that its accuracy is sufficient for approximating the user's gaze. As part of this paper, we release the first eye tracking library that can be easily integrated in any website for real-time gaze interactions, usability studies, or web research.
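The self-calibration idea in that abstract can be sketched as a small ridge regression from eye features to click coordinates. This is a hedged illustration, not WebGazer's actual code or API; the feature vector here is a placeholder for whatever pupil or eye-patch descriptor a detector would produce.

```python
# Sketch of interaction-driven self-calibration: each click supplies a label
# for where the eyes were looking, and a ridge regression maps eye features
# to screen coordinates.  Names and constants are illustrative assumptions.
import numpy as np

class GazeRegressor:
    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.features, self.clicks = [], []
        self.weights = None

    def add_interaction(self, eye_features, click_xy):
        """Record an (eye features, click position) calibration pair and refit."""
        self.features.append(np.asarray(eye_features, dtype=float))
        self.clicks.append(np.asarray(click_xy, dtype=float))
        X = np.column_stack([np.ones(len(self.features)), np.vstack(self.features)])
        Y = np.vstack(self.clicks)                      # (n, 2) screen positions
        A = X.T @ X + self.alpha * np.eye(X.shape[1])   # ridge-regularized normal equations
        self.weights = np.linalg.solve(A, X.T @ Y)

    def predict(self, eye_features):
        """Estimate the on-screen (x, y) gaze position from current eye features."""
        x = np.concatenate([[1.0], np.asarray(eye_features, dtype=float)])
        return tuple(x @ self.weights)

# Toy usage: random stand-ins for eye features, clicks placed where gaze "is".
rng = np.random.default_rng(0)
gaze = GazeRegressor()
for _ in range(20):
    f = rng.random(8)
    gaze.add_interaction(f, (800 * f[0], 600 * f[1]))
print(gaze.predict(rng.random(8)))
```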
-
Shared control systems can make complex robot teleoperation tasks easier for users. These systems predict the user’s goal, determine the motion required for the robot to reach that goal, and combine that motion with the user’s input. Goal prediction is generally based on the user’s control input (e.g., the joystick signal). In this paper, we show that this prediction method is especially effective when users follow standard noisily optimal behavior models. In tasks with input constraints like modal control, however, this effectiveness no longer holds, so additional sources for goal prediction can improve assistance. We implement a novel shared control system that combines natural eye gaze with joystick input to predict people’s goals online, and we evaluate our system in a real-world, COVID-safe user study. We find that modal control reduces the efficiency of assistance according to our model, and when gaze provides a prediction earlier in the task, the system’s performance improves. However, gaze on its own is unreliable and assistance using only gaze performs poorly. We conclude that control input and natural gaze serve different and complementary roles in goal prediction, and using them together leads to improved assistance.
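A hedged sketch of the fusion idea: maintain a belief over candidate goals and multiply in two independent likelihoods, one from how well the joystick direction points at each goal (a noisily optimal input model) and one from how close the current gaze fixation is to each goal. The models, constants, and names below are illustrative assumptions, not the authors' system.

```python
# Illustrative Bayesian goal prediction combining joystick and gaze evidence.
import numpy as np

def joystick_likelihood(robot_pos, joystick_dir, goals, beta=5.0):
    """Noisily-optimal input model: commands pointing toward a goal are likelier."""
    to_goals = goals - robot_pos
    to_goals /= np.linalg.norm(to_goals, axis=1, keepdims=True)
    alignment = to_goals @ (joystick_dir / np.linalg.norm(joystick_dir))
    return np.exp(beta * alignment)

def gaze_likelihood(fixation, goals, sigma=0.1):
    """Gaussian gaze model: fixations near a goal count as evidence for it."""
    d = np.linalg.norm(goals - fixation, axis=1)
    return np.exp(-0.5 * (d / sigma) ** 2)

def update_belief(belief, robot_pos, joystick_dir, fixation, goals):
    belief = belief * joystick_likelihood(robot_pos, joystick_dir, goals) \
                    * gaze_likelihood(fixation, goals)
    return belief / belief.sum()

goals = np.array([[1.0, 0.0], [0.0, 1.0]])
belief = np.array([0.5, 0.5])
# Early on the joystick is ambiguous (points between the goals), but gaze
# already rests near the second goal, so the belief shifts toward it.
belief = update_belief(belief, np.zeros(2), np.array([0.7, 0.7]),
                       np.array([0.05, 0.95]), goals)
print(belief)
```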