Peripheral vision is fundamental for many real-world tasks, including walking, driving, and aviation. Nonetheless, there has been no effort to connect these applied literatures to research on peripheral vision in basic vision science or sports science. To close this gap, we analyzed 60 relevant papers, chosen according to objective criteria. Applied research, with its real-world time constraints, complex stimuli, and performance measures, reveals new functions of peripheral vision. Peripheral vision is used to monitor the environment (e.g., road edges, traffic signs, or malfunctioning lights) in ways that differ from basic research. Applied research uncovers new actions that one can perform solely with peripheral vision (e.g., steering a car, climbing stairs). Importantly, peripheral vision helps compare the position of one's body or vehicle to objects in the world. In addition, many real-world tasks require multitasking, and because peripheral vision provides degraded but useful information, tradeoffs are common in deciding whether to use peripheral vision or move one's eyes. These tradeoffs are strongly influenced by factors like expertise, age, distraction, emotional state, task importance, and what the observer already knows. They also make it hard to infer from eye movements alone what information is gathered from peripheral vision and what tasks we can do without it. Finally, we recommend three ways in which basic, sport, and applied science can benefit each other's methodology, furthering our understanding of peripheral vision more generally.
                            Distraction “Hangover”: Characterization of the Delayed Return to Baseline Driving Risk After Distracting Behaviors
Objective: We measured how long distraction by a smartphone affects simulated driving behaviors after the distracting tasks are completed (i.e., the distraction hangover).
Background: Most drivers know that smartphones distract. To limit distraction, drivers can use hands-free devices, with which they only briefly glance at the smartphone. However, the cognitive cost of switching tasks from driving to communicating and back to driving adds an underappreciated, potentially long period to the total distraction time.
Method: Ninety-seven 21- to 78-year-old individuals who self-identified as active drivers and smartphone users engaged in a simulated driving scenario that included smartphone distractions. Peripheral-cue and car-following tasks were used to assess driving behavior, along with synchronized eye tracking.
Results: Participants' lateral speed remained larger than baseline for 15 s after the end of a voice distraction and for up to 25 s after a text distraction. Correct identification of peripheral cues dropped about 5% per decade of age, and participants in the 71+ age group missed about 50% of peripheral cues within 4 s of the distraction. During distraction, coherence with the lead car in the car-following task dropped from 0.54 to 0.045, and seven participants rear-ended the lead car. Breadth of scanning contracted by 50% after distraction.
Conclusion: Simulated driving performance drops dramatically after smartphone distraction for all ages and for both voice and texting.
Application: Public education should include the dangers of any smartphone use during driving, including hands-free use.
- Award ID(s): 1640909
- PAR ID: 10290368
- Journal Name: Human Factors: The Journal of the Human Factors and Ergonomics Society
- ISSN: 0018-7208
- Page Range / eLocation ID: 001872082110122
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Although driving is a complex, multitask activity, it is not unusual for drivers to simultaneously engage in other, non-driving-related tasks using secondary in-vehicle information systems (IVIS). The use of IVIS and its potential negative safety consequences has been investigated over the years. However, with the advent and advance of in-vehicle technologies such as augmented-reality head-up displays (AR HUDs), there are increasing opportunities for improving secondary task engagement and decreasing negative safety consequences. In this study, we aim to understand the effects of low-cognitive-load AR HUD tasks on driving performance during monotonous driving. Adapting NHTSA's driver distraction guidelines, we conducted a user study in which twenty-four gender-balanced participants performed secondary AR HUD tasks of different durations while driving in a monotonous environment using a medium-fidelity driving simulator. We performed a mixed-methods analysis to evaluate drivers' perceived workload (NASA-TLX) and lateral and longitudinal driving performance. Although drivers subjectively perceived AR HUD tasks to have a higher cognitive demand, AR tasks resulted in improved driving performance. Conversely, the duration of the secondary tasks had no measurable impact on performance, which suggests that the amount of time spent on tasks has no negative or positive implications for driving performance. We provide evidence that there are potential benefits of secondary AR task engagement; in fact, there are situations in which AR HUDs can improve drivers' alertness and vigilance.
- Objective: This study explored how age and sex affect the ability to respond to an emergency when in a self-driving vehicle. Methods: Sixty drivers (male: 48%, female: 52%) from different age groups (teens: aged 16–19, 32%; adults: aged 35–54, 37%; seniors: aged 65+, 32%) were recruited to share their perspectives on self-driving technology. They were invited to ride in a driving simulator that mimicked a vehicle in autopilot mode (longitudinal and lateral control). Results: In a scenario where the automated vehicle unexpectedly drove toward a closed highway exit, 21% of drivers did not react at all. In this event, where drivers had 6.2 s to avoid a crash, 40% of drivers crashed. Adults aged 35–54 crashed less than the other age groups (33% crash rate), whereas teens crashed more (47% crash rate) and seniors had the highest crash rate (50%). Males (38% crash rate) crashed less than females (43% crash rate). All participants with a reaction time under 4 s avoided the crash. Conclusions: The results from the simulation drives show that humans lose focus when they do not actively drive, so their response in an emergency does not allow them to reclaim control quickly enough to avoid a crash.
- Objective: This study examined the impact of monitoring instructions when using an automated driving system (ADS), and of road obstructions, on post-takeover performance in near-miss scenarios. Background: Past research indicates that partial ADS reduces the driver's situation awareness and degrades post-takeover performance. Connected vehicle technology may alert drivers to impending hazards in time to safely avoid near-miss events. Method: Forty-eight licensed drivers using ADS were randomly assigned to either the active or the passive driving condition. Participants navigated eight scenarios, with or without a visual obstruction, in a distributed driving simulator. The experimenter drove the other simulated vehicle to manually cause near-miss events. Participants' mean longitudinal velocity, standard deviation of longitudinal velocity, and mean longitudinal acceleration were measured. Results: Participants in the passive ADS group showed greater, and more variable, deceleration rates than those in the active ADS group. Despite a reliable audiovisual warning, participants failed to slow down in the red-light-running scenario when the conflict vehicle was occluded. Participants' trust in the automated driving system did not vary between the beginning and end of the experiment. Conclusion: Drivers interacting with ADS in a passive manner may continue to show increased and more variable deceleration rates in near-miss scenarios even with reliable connected vehicle technology. Future research may focus on the interactive effects of automated and connected driving technologies on drivers' ability to anticipate and safely navigate near-miss scenarios. Application: Designers of automated and connected vehicle technologies may consider different timings and types of cues to inform drivers of imminent hazards in high-risk scenarios for near-miss events.
- Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used for other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advance of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge acquisition are more capable of navigating from their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals can navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues), to assess the plausibility of, and justification for, investment in generating larger-FOV AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign or a screen-relative traditional arrow. During the 10–15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completing the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher rate of perceived workload. However, in terms of overall perceived workload, results suggest there is no significant difference between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes in the two conditions is statistically similar, indicating that participants using both interface designs remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction influenced landmark knowledge test performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants performed worse on the route knowledge task than on the landmark knowledge task (independent of HUD interface design). Conclusions: This study described a driving simulator experiment that evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have similar impacts on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective; this study instead suggests that simple, screen-fixed designs may be effective in certain contexts.