Title: SearchGazer: Webcam Eye Tracking for Remote Studies of Web Search
We introduce SearchGazer, a web-based eye tracker for remote web search studies using common webcams already present in laptops and some desktop computers. SearchGazer is a pure JavaScript library that infers the gaze behavior of searchers in real time. The eye tracking model self-calibrates by watching searchers interact with the search pages and trains a mapping of eye features to gaze locations and search page elements on the screen. Contrary to typical eye tracking studies in information retrieval, this approach does not require the purchase of any additional specialized equipment, and can be done remotely in a user's natural environment, leading to cheaper and easier visual attention studies. While SearchGazer is not intended to be as accurate as specialized eye trackers, it is able to replicate many of the research findings of three seminal information retrieval papers: two that used eye tracking devices, and one that used the mouse cursor as a restricted focus viewer. Charts and heatmaps from those original papers are plotted side-by-side with SearchGazer results. While the main results are similar, there are some notable differences, which we hypothesize derive from improvements in the latest ranking technologies used by current versions of search engines and diligence by remote users. As part of this paper, we also release SearchGazer as a library that can be integrated into any search page.
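To make the integration concrete, the following is a minimal sketch of embedding a SearchGazer-style tracker in a search page. It assumes a WebGazer-like API (a global searchgazer object exposing setGazeListener and begin) and a hypothetical way of resolving predictions to page elements; consult the released library for its actual entry points.

```javascript
// Minimal sketch of embedding a SearchGazer-style tracker in a search page.
// Assumes a WebGazer-like API (global `searchgazer` with setGazeListener /
// begin); the released library's real entry points may differ.
window.addEventListener('load', function () {
  searchgazer.setGazeListener(function (prediction, elapsedTime) {
    if (prediction == null) { return; } // no eyes detected in this frame
    // prediction.x / prediction.y: estimated on-screen gaze coordinates.
    // Resolve the gaze to the search-page element under it (result link,
    // snippet, query box, ...) so visual attention can be logged per element.
    var el = document.elementFromPoint(prediction.x, prediction.y);
    console.log(elapsedTime, prediction.x, prediction.y, el ? el.id : null);
  }).begin(); // starts the webcam stream and the self-calibrating model
});
```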
Award ID(s):
1464061 1552663
NSF-PAR ID:
10024075
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the 2017 Conference on Human Information Interaction and Retrieval (CHIIR '17)
Page Range / eLocation ID:
17-26
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We introduce WebGazer, an online eye tracker that uses common webcams already present in laptops and mobile devices to infer the eye-gaze locations of web visitors on a page in real time. The eye tracking model self-calibrates by watching web visitors interact with the web page and trains a mapping between features of the eye and positions on the screen. This approach aims to provide a natural experience to everyday users that is not restricted to laboratories and highly controlled user studies. WebGazer has two key components: a pupil detector that can be combined with any eye detection library, and a gaze estimator using regression analysis informed by user interactions. We perform a large remote online study and a small in-person study to evaluate WebGazer. The findings show that WebGazer can learn from user interactions and that its accuracy is sufficient for approximating the user's gaze. As part of this paper, we release the first eye tracking library that can be easily integrated into any website for real-time gaze interactions, usability studies, or web research.
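The self-calibration idea can be illustrated with a toy sketch: each click is treated as a moment when gaze and cursor coincide, yielding one labeled (eye features, screen position) training sample. WebGazer itself fits regularized (ridge) regression over pupil-patch features; the sketch below substitutes a simple online gradient update and a stubbed feature extractor, so it shows the training loop rather than the library's actual implementation.

```javascript
// Toy sketch of interaction-driven self-calibration (not WebGazer's code).
// A linear model per screen coordinate maps an eye-feature vector to a
// gaze position; each click provides one supervised sample.
const DIM = 8; // assumed length of the eye-feature vector
const W = { x: new Array(DIM).fill(0), y: new Array(DIM).fill(0) };

function predict(f) {
  const dot = (w) => w.reduce((s, wi, i) => s + wi * f[i], 0);
  return { x: dot(W.x), y: dot(W.y) };
}

// One stochastic-gradient step: nudge predictions toward the click point.
function calibrate(f, clickX, clickY, lr = 1e-4) {
  const p = predict(f);
  for (let i = 0; i < DIM; i++) {
    W.x[i] += lr * (clickX - p.x) * f[i];
    W.y[i] += lr * (clickY - p.y) * f[i];
  }
}

// Stub: in WebGazer these would be intensity features from the detected
// pupil patch; random values here keep the sketch self-contained.
function currentEyeFeatures() {
  return Array.from({ length: DIM }, () => Math.random());
}

document.addEventListener('click', (e) => {
  calibrate(currentEyeFeatures(), e.pageX, e.pageY);
});
```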
  2. Many images on the Web, including photographs and artistic images, feature spatial relationships between objects that are inaccessible to someone who is blind or visually impaired even when a text description is provided. While some tools exist to manually create accessible image descriptions, this work is time-consuming and requires specialized tools. We introduce an approach that automatically creates spatially registered image labels based on how a sighted person naturally interacts with the image. Our system, EyeDescribe, collects behavioral data from sighted viewers of an image, specifically eye gaze data and spoken descriptions, and uses them to generate a spatially indexed accessible image that can then be explored using an audio-based touch screen application. We describe our approach to assigning text labels to locations in an image based on eye gaze. We then report on two formative studies in which blind users tested EyeDescribe. Our approach resulted in correct labels for all objects in our image set. Participants were able to better recall the location of objects when given both object labels and spatial locations. This approach provides a new method for creating accessible images with minimal required effort.
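One plausible reading of the gaze-to-label assignment, sketched with hypothetical names (this is not EyeDescribe's published code): anchor each spoken label at the centroid of the gaze samples recorded while that word was being spoken.

```javascript
// Illustrative sketch of spatially registering labels from gaze
// (hypothetical names, not EyeDescribe's implementation): each spoken
// label is anchored at the centroid of the gaze recorded during it.
function spatializeLabels(gazeSamples, spokenLabels) {
  // gazeSamples: [{t, x, y}]; spokenLabels: [{word, tStart, tEnd}]
  return spokenLabels.map(({ word, tStart, tEnd }) => {
    const pts = gazeSamples.filter((s) => s.t >= tStart && s.t <= tEnd);
    if (pts.length === 0) return { word, x: null, y: null };
    const cx = pts.reduce((sum, p) => sum + p.x, 0) / pts.length;
    const cy = pts.reduce((sum, p) => sum + p.y, 0) / pts.length;
    return { word, x: cx, y: cy }; // label pinned to its gaze centroid
  });
}
```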
  3. Measuring eye movements remotely via the participant's webcam promises to be an attractive methodological addition to in-person eye-tracking in the lab. However, there is a lack of systematic research comparing remote web-based eye-tracking with in-lab eye-tracking in young children. We report a multi-lab study that compared these two measures in an anticipatory looking task with toddlers using WebGazer.js and jsPsych. Results of our remotely tested sample of 18-27-month-old toddlers (N = 125) revealed that web-based eye-tracking successfully captured goal-based action predictions, although the proportion of goal-directed anticipatory looking was lower compared to the in-lab sample (N = 70). As expected, the attrition rate was substantially higher in the web-based (42%) than the in-lab sample (10%). Excluding trials based on visual inspection of the time-locked gaze coordinates and the participant's webcam video overlaid on the stimuli was an important preprocessing step to reduce noise in the data. We discuss the use of this remote web-based method in comparison with other current methodological innovations. Our study demonstrates that remote web-based eye-tracking can be a useful tool for testing toddlers, facilitating recruitment of larger and more diverse samples; a caveat to consider is the larger drop-out rate.
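For readers unfamiliar with the dependent measure, the sketch below shows one way a proportion of goal-directed anticipatory looking could be computed from webcam gaze samples; it is illustrative only and does not reproduce the study's WebGazer.js/jsPsych pipeline.

```javascript
// Hedged analysis sketch (not the study's pipeline): proportion of gaze
// samples in the anticipation window that fall on the goal area of
// interest (AOI), out of all samples that fall on either AOI.
function inAoi(s, aoi) {
  return s.x >= aoi.left && s.x <= aoi.right &&
         s.y >= aoi.top && s.y <= aoi.bottom;
}

function goalAnticipationProportion(samples, win, goalAoi, distractorAoi) {
  const inWin = samples.filter((s) => s.t >= win.start && s.t <= win.end);
  const goal = inWin.filter((s) => inAoi(s, goalAoi)).length;
  const distr = inWin.filter((s) => inAoi(s, distractorAoi)).length;
  // null when neither AOI was looked at during the window
  return goal + distr > 0 ? goal / (goal + distr) : null;
}
```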
  4. With the demand for and abundance of information increasing over the last two decades, generations of computer scientists have been trying to improve the whole process of information searching, retrieval, and storage. With the diversification of information sources, users' requirements for the data have also changed drastically, both in terms of usability and performance. This growth in source material and requirements has made correctly sorting, filtering, and storing data a source of many new challenges in the field. With the help of all four other teams on this project, we are developing an information retrieval, analysis, and storage system to retrieve data from Virginia Tech's Electronic Thesis and Dissertation (ETD), Twitter, and Web Page archives. We seek to provide an appropriate data research and management tool that lets users access specific data. The system will also give certain users the authority to manage and add more data to the system. This project's deliverable will be combined with four others to produce a system usable by Virginia Tech's library system to manage, maintain, and analyze these archives. This report introduces the system components and the design decisions behind how it has been planned and implemented. Our team has developed a front-end web interface that is able to search, retrieve, and manage three important content collection types: ETDs, tweets, and web pages. The interface incorporates a simple hierarchical user permission system, providing different levels of access to its users. To facilitate the workflow with other teams, we have containerized this system and made it available on the Virginia Tech cloud server. The system also makes use of a dynamic workflow system using a KnowledgeGraph and Apache Airflow, providing a high degree of functional extensibility. This allows curators and researchers to use containerized services for crawling, pre-processing, parsing, and indexing the custom corpora and collections available to them in the system.
  5. Background Home health aides (HHAs) provide necessary hands-on care to older adults and those with chronic conditions in their homes. Despite their integral role, HHAs experience numerous challenges in their work, including difficulty communicating with other health care professionals about patient care while caring for patients, and limited access to educational resources. Although technological interventions have the potential to address these challenges, little is known about the technological landscape and existing technology-based interventions designed for and used by this workforce. Objective We conducted a scoping review of the scientific literature to identify existing studies that have described, designed, deployed, or tested technology-based tools and apps intended for use by HHAs to care for patients at home. To complement our literature review, we conducted a landscape analysis of existing mobile apps intended for HHAs providing in-home care. Methods We searched the following databases from their inception to October 2020: Ovid MEDLINE, Ovid Embase, Cochrane Library, and CINAHL (EBSCO). A total of 3 researchers screened the yield using prespecified inclusion and exclusion criteria. In addition, 4 researchers independently reviewed these articles, and a fifth researcher arbitrated when needed. Among studies that met the inclusion criteria, data were extracted and summarized narratively. An analysis of mobile health apps designed for HHAs was performed using a predefined set of terms to search the Google Play and Apple App stores. Overall, 2 researchers independently screened the resulting apps, and those that met the inclusion criteria were categorized according to their intended purpose and functionality. Results Of the 8643 studies retrieved, 182 (2.11%) underwent full-text review, and 4.9% (9/182) met our inclusion criteria. Approximately half (4/9, 44%) of the studies were descriptive in nature, proposing technology-based systems (e.g., web portals and dashboards) or prototypes without a technical or user-based evaluation of the technology. In most (7/9, 78%) papers, HHAs were just one of several users and not the sole or primary intended users of the technology. Our review of mobile apps yielded 166 Android and iOS apps, of which 48 (29%) met the inclusion criteria. These apps provided HHAs with one or more of the following functions: electronic visit verification (29/48, 60%), clocking in and out (23/48, 48%), documentation (22/48, 46%), task checklists (19/48, 40%), communication between HHA and agency (14/48, 29%), patient information (6/48, 13%), resources (5/48, 10%), and communication between HHA and patients (4/48, 8%). Of the 48 apps, 25 (52%) performed monitoring functions, 4 (8%) performed supporting functions, and 19 (40%) performed both. Conclusions A limited number of studies and mobile apps have been designed to support HHAs in their work. Further research and rigorous evaluation of technology-based tools are needed to assess their impact on the work HHAs provide in patients' homes.