- Award ID(s):
- 1730181
- NSF-PAR ID:
- 10148766
- Date Published:
- Journal Name:
- 11th ACM Symposium on Eye Tracking Research and Applications (ETRA)
- Page Range / eLocation ID:
- 4 pages
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Stack Overflow is commonly used by software developers to help solve problems they face while working on software tasks such as fixing bugs or building new features. Recent research has explored how the content of Stack Overflow posts affects attraction and how the reputation of users attracts more visitors. However, there is very little evidence on the effect that visual attractors and content quantity have on directing gaze toward parts of a post, and on which parts hold a user's attention longer. Moreover, little is known about how these attractors help developers (students and professionals) answer comprehension questions. This paper presents an eye tracking study of thirty developers constrained to reading only Stack Overflow posts while summarizing four open source methods or classes. Results indicate that, on average, paragraphs and code snippets were fixated upon most often and longest. When ranking pages by the number of code blocks and paragraphs they contain, we found that while the presence of more code blocks did not affect the number of fixations, increasing numbers of plain-text paragraphs significantly drove down fixations on comments. SO posts viewed only by students had longer fixation times on code elements within the first ten fixations. We found that 16 developer summaries contained 5 or more meaningful terms from the SO posts they viewed. We discuss how our observations of reading behavior could benefit how users structure their posts.
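The per-element fixation tallies described above can be sketched as a simple area-of-interest (AOI) aggregation. This is a minimal illustration, not the study's pipeline: the `Fixation` record and the AOI labels ("paragraph", "code", "comment") are invented here, and the actual data format used in the paper is not specified.

```python
# Hypothetical sketch: tally fixation counts and total dwell time per
# AOI type on a Stack Overflow post. Fixation records and AOI labels
# are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Fixation:
    aoi: str          # AOI type the fixation landed on
    duration_ms: int  # fixation duration in milliseconds

def summarize_fixations(fixations):
    """Return (count per AOI, total duration per AOI)."""
    counts = defaultdict(int)
    total_ms = defaultdict(int)
    for f in fixations:
        counts[f.aoi] += 1
        total_ms[f.aoi] += f.duration_ms
    return dict(counts), dict(total_ms)

fixations = [
    Fixation("paragraph", 220), Fixation("code", 310),
    Fixation("code", 280), Fixation("comment", 150),
    Fixation("paragraph", 240),
]
counts, totals = summarize_fixations(fixations)
# counts -> {"paragraph": 2, "code": 2, "comment": 1}
# totals -> {"paragraph": 460, "code": 590, "comment": 150}
```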
-
Instant access to personal data is a double-edged sword, and it has transformed society. It enhances convenience and interpersonal interaction through social media, while also making us all more vulnerable to identity theft and cybercrime. The need for hack-resistant biometric authentication is greater than ever. Previous studies have demonstrated that eye movements differ between individuals, so the characterization of eye movements might provide a highly secure and convenient approach to personal identification: eye movements are generated by the owner's living brain in real time and are therefore extremely difficult for hackers to imitate. To study the potential of eye movements as a biometric tool, we characterized the eye movements of 18 participants. We examined an entire battery of oculomotor behaviors, including the unconscious eye movements that occur during ocular fixation; this resulted in a high-precision oculomotor signature that can identify individuals. We show that one-versus-one machine learning classification, applied with a nearest-neighbor statistic, yielded an accuracy of >99% based on ~25-minute sessions, during which participants executed fixations, visual pursuits, free viewing of images, etc. Even when we examined only the ~3 minutes in which participants executed the fixation task by itself, discrimination accuracy was higher than 96%. When we further split the fixation data randomly into 30-second chunks, we obtained a remarkably high accuracy of 92%. Because eye trackers provide improved spatial and temporal resolution with each new generation, we expect that both the accuracy and the minimum sample duration necessary for reliable oculomotor biometric verification can be further optimized.
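The nearest-neighbor idea behind this kind of oculomotor biometric can be sketched in a few lines: each session is reduced to a feature vector, and a probe session is attributed to the enrolled participant whose vector is closest. The three-dimensional "signatures" below are invented placeholders (e.g. mean fixation duration, microsaccade rate, drift speed); the study itself used a far richer feature battery and one-versus-one classification.

```python
# Hedged sketch of nearest-neighbor identification from oculomotor
# feature vectors. Feature values are made up for illustration.
import math

def nearest_neighbor_id(enrolled, probe):
    """Return the participant whose enrolled vector is closest to probe."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(enrolled, key=lambda pid: dist(enrolled[pid], probe))

enrolled = {
    "p01": (210.0, 1.4, 0.52),  # hypothetical oculomotor signature
    "p02": (305.0, 0.9, 0.31),
    "p03": (250.0, 2.1, 0.77),
}
probe = (215.0, 1.5, 0.50)      # new session from an unknown user
print(nearest_neighbor_id(enrolled, probe))  # -> p01
```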
-
Observable reading behavior, the act of moving the eyes over lines of text, is highly stereotyped among the users of a language, and this has led to the development of reading detectors: methods that input windows of sequential fixations and output predictions of whether the fixation behavior during those windows is reading or skimming. The present study introduces a new method for reading detection using a Region Ranking SVM (RRSVM). An SVM-based classifier learns the local oculomotor features that are important for real-time reading detection while it optimizes the global reading/skimming classification, making it unnecessary to hand-label local fixation windows for model training. This RRSVM reading detector was trained and evaluated using eye movement data collected in a laboratory context, where participants viewed modified web news articles and had to either read them carefully for comprehension or skim them quickly to select keywords (separate groups). Ground-truth labels were known at the global level (the instructed reading or skimming task) and obtained at the local level in a separate rating task. The RRSVM reading detector accurately predicted 82.5% of the global (article-level) reading/skimming behavior, with accuracy in predicting local window labels ranging from 72% to 95%, depending on how the RRSVM was tuned for local and global weights. With this RRSVM reading detector, a method now exists for near real-time reading detection without the need for hand-labeling of local fixation windows. With real-time reading detection capability comes the potential for applications ranging from education and training to intelligent interfaces that learn what a user is likely to know based on previous detection of their reading behavior.
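The windowing step that such detectors share can be sketched with a deliberately simplified stand-in (this is NOT the RRSVM described above): fixations are grouped into fixed-size windows, each window is summarized by two invented features (mean horizontal saccade length and regression rate), and a hand-set threshold rule labels it. The RRSVM instead learns which windows matter while optimizing the global article-level label.

```python
# Simplified, hand-tuned stand-in for a reading detector. Window size,
# features, and thresholds are all invented for illustration.
def window_features(xs, size=5):
    """Yield (mean |saccade| in px, regression rate) per fixation window."""
    for i in range(0, len(xs) - size + 1, size):
        win = xs[i:i + size]
        saccades = [b - a for a, b in zip(win, win[1:])]  # x-displacements
        mean_len = sum(abs(s) for s in saccades) / len(saccades)
        regressions = sum(1 for s in saccades if s < 0) / len(saccades)
        yield mean_len, regressions

def label_window(mean_len, regressions):
    # Reading: short forward saccades with occasional regressions;
    # skimming: long jumps and few regressions. Thresholds are made up.
    return "reading" if mean_len < 60 and regressions > 0.1 else "skimming"

xs = [10, 25, 38, 30, 44, 120, 260, 410, 500, 640]  # fixation x-positions
labels = [label_window(m, r) for m, r in window_features(xs)]
# -> ["reading", "skimming"]
```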
-
Studies of eye movements during source code reading have supported the idea that reading source code differs fundamentally from reading natural text. This paper analyzed an existing data set of natural-language and source code eye movement data using the E-Z reader model of eye movement control. The results show that the E-Z reader model can be used with natural text and with source code, where it provides good predictions of eye movement durations. This result is confirmed by comparing model predictions to eye movement data from this experiment and calculating the correlation score for each metric. Finally, it was found that gaze duration is influenced by token frequency in both code and natural text. The frequency effect is less pronounced on first fixation duration and single fixation duration. An eye movement control model for source code reading may open the door for tools in education and industry to enhance program comprehension.
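The frequency effect mentioned above (rarer tokens attract longer gaze durations) can be illustrated with a toy correlation check. The token counts and gaze durations below are invented; the sketch simply computes Pearson's r between log token frequency and gaze duration, which comes out strongly negative on this made-up data.

```python
# Illustrative sketch of the frequency effect: gaze duration falls as
# log token frequency rises. All numbers are invented.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

token_freq = {"i": 900, "for": 400, "return": 120, "accumulator": 3}
gaze_ms    = {"i": 180, "for": 210, "return": 260, "accumulator": 340}

tokens = list(token_freq)
r = pearson([math.log(token_freq[t]) for t in tokens],
            [gaze_ms[t] for t in tokens])
print(r < -0.9)  # -> True: rarer tokens are fixated longer
```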
-
Research on infant and toddler reaching has shown evidence for motor planning after the initiation of the reaching action. However, the reach action sequence does not begin at the initiation of a reach; it includes the initial visual fixations onto the target object that occur before the reach. We developed a paradigm that synchronizes head-mounted eye tracking and motion capture to determine whether the latency between the first visual fixation on a target object and the first reaching movement toward the object predicts subsequent reaching behavior in toddlers. In a corpus of over one hundred reach sequences produced by 17 toddlers, we found that longer fixation-reach latencies during the pre-reach phase predicted slower reaches. If the slowness of an executed reach indicates reach difficulty, then the duration of pre-reach planning would be correlated with reach difficulty. However, no relation was found with pre-reach planning duration when reach difficulty was measured by the usual factors, independent of reach duration. The findings raise important questions about the measurement of reach difficulty, models of motor control, and possible developmental changes in the relations between pre-planning and continuously unfolding motor plans throughout an action sequence.
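The central relation reported here (longer fixation-to-reach latencies go with slower reaches) is a simple correlation between two per-reach measures, sketched below on hypothetical numbers. The latency and reach-duration values are invented; only the direction of the relation mirrors the finding.

```python
# Hypothetical data illustrating the reported relation: longer
# fixation->reach-onset latencies accompany longer reach durations.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

latency_ms = [300, 450, 500, 700, 900, 1100]  # first fixation -> reach onset
reach_ms   = [620, 640, 700, 780, 860, 950]   # reach duration

r = pearson(latency_ms, reach_ms)
print(r > 0.9)  # -> True: longer latencies predict slower reaches
```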