

Title: Defining A Design Space of The Auto-Mobile Office: A Computational Abstraction Hierarchy Analysis
One advantage of highly automated vehicles is that drivers can use commute time for non-driving tasks, such as work. The potential for an auto-mobile office, a space where drivers work in automated vehicles, is a complex yet underexplored idea. This paper begins to define a design space of the auto-mobile office in SAE Level 3 automated vehicles by integrating the affinity diagram (AD) with a computational representation of the abstraction hierarchy (AH). The AD uses a bottom-up approach in which researchers start with individual findings and aggregate and abstract them into higher-level concepts. The AH uses a top-down approach in which researchers start from first principles to identify means-ends links between system goals and the concrete forms of the system. Representing the AH in the programming language R allows its means-ends links to be explored statistically. This computational approach to the AH provides a systematic means to define the design space of the auto-mobile office.
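The abstract names R as the language used to explore the AH's means-ends links statistically but does not reproduce any code. The following is a minimal sketch, assuming the hierarchy is stored as plain tables of nodes and means-ends links; every node name below is an illustrative placeholder, not an item from the paper.

    # Hedged sketch in base R: an abstraction hierarchy as tables of nodes and
    # means-ends links. All node names are illustrative placeholders.
    nodes <- data.frame(
      node  = c("Support productive commuting",                 # functional purpose
                "Minimize workload", "Maintain safety margin",  # abstract function
                "Manage work tasks", "Monitor driving",         # generalized function
                "Dictation system", "HUD alerts",               # physical function
                "Cabin microphone", "Windshield display"),      # physical form
      level = c(1, 2, 2, 3, 3, 4, 4, 5, 5)
    )

    # Each row links a lower-level node (the means) to the higher-level node
    # (the end) it serves.
    links <- data.frame(
      end   = c("Support productive commuting", "Support productive commuting",
                "Minimize workload", "Maintain safety margin",
                "Manage work tasks", "Monitor driving",
                "Dictation system", "HUD alerts"),
      means = c("Minimize workload", "Maintain safety margin",
                "Manage work tasks", "Monitor driving",
                "Dictation system", "HUD alerts",
                "Cabin microphone", "Windshield display")
    )

    # Simple statistical exploration of the link structure:
    table(links$end)    # how many means support each end (sparse ends may mark gaps)
    table(links$means)  # how many ends each means serves (multi-purpose elements)
    link_levels <- merge(links, nodes, by.x = "means", by.y = "node")
    table(link_levels$level)  # how many links originate at each AH level

Richer analyses (for example, treating the links as a directed graph) would follow the same pattern: once the means-ends links sit in a data frame, standard R summaries apply directly.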
Award ID(s):
1839484
NSF-PAR ID:
10354849
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
64
Issue:
1
ISSN:
2169-5067
Page Range / eLocation ID:
293 to 297
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This video shows a concept of a future mobile office in a semi-automated vehicle that uses augmented reality. People perform non-driving tasks in current, non-automated vehicles even though that is unsafe. Moreover, even for passengers, space is limited, the setting is not social, and motion sickness can occur. In future cars, technology such as augmented reality might alleviate some of these issues. Our concept shows how augmented reality can project a remote conversant onto the dashboard. This way, the driver can keep an occasional eye on the road while the automated vehicle drives, and might experience less motion sickness. Potentially, this concept might even be used for group calls or for group activities such as karaoke, thereby creating a social setting. We also demonstrate how integration with an intelligent assistant (through speech and gesture analysis) might save the driver from having to grab a calendar to write things down, again allowing them to focus on the road.
  2. Self-driving vehicles are the latest innovation in improving personal mobility and road safety by removing arguably error-prone humans from driving-related tasks. Such advances can prove especially beneficial for people who are blind or have low vision, who cannot legally operate conventional motor vehicles. Missing from the related literature, we argue, are studies that describe strategies for vehicle design for these persons. We present a case study of the participatory design of a prototype for a self-driving vehicle human-machine interface (HMI), created for a graduate-level course on inclusive design and accessible technology. We reflect on the process of working alongside a co-designer, a person with a visual disability, to identify user needs, define design ideas, and produce a low-fidelity prototype for the HMI. This paper may benefit researchers interested in using a similar approach for designing accessible autonomous vehicle technology. INTRODUCTION The rise of autonomous vehicles (AVs) may prove to be one of the most significant innovations in personal mobility of the past century. Advances in automated vehicle technology, and advanced driver assistance systems (ADAS) specifically, may have a significant impact on road safety and a reduction in vehicle accidents (Brinkley et al., 2017; Dearen, 2018). According to the Department of Transportation (DoT), automated vehicles could help reduce road accidents caused by human error by as much as 94% (SAE International, n.d.). In addition to reducing traffic accidents and saving lives and property, autonomous vehicles may also prove to be of significant value to persons who cannot otherwise operate conventional motor vehicles. AVs may provide the necessary mobility, for instance, to help create new employment opportunities for nearly 40 million Americans with disabilities (Claypool et al., 2017; Guiding Eyes for the Blind, 2019). Advocates for the visually impaired specifically have expressed how "transformative" this technology can be for those who are blind or have significant low vision (Winter, 2015), persons who cannot otherwise legally operate a motor vehicle. While autonomous vehicles have the potential to break down transportation
  3. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used in other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing them to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function, in that individuals are able to navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation.
Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues) to assess the plausibility of, and justification for, investment in generating larger field-of-view (FOV) AR HUDs with potentially multiple focal planes.
Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time.
Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, though at a higher perceived workload. However, in terms of overall perceived workload, results suggest there is no significant difference between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes in both conditions is statistically similar, indicating that participants using both interface designs remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction had an influence on landmark knowledge testing performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants exhibited poorer performance in the route knowledge task than in the landmark knowledge task (independent of HUD interface design).
Conclusions: This driving simulator study evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have a similar impact on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
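The abstract above mentions a deviance analysis of landmark knowledge by interface condition and maneuver direction but, naturally, does not include the analysis itself. As a purely illustrative sketch, one way such an analysis might be set up in base R is shown below; the simulated data frame and every variable name in it are hypothetical placeholders, not the study's materials.

    # Hypothetical sketch of a deviance analysis for landmark recognition.
    # The data are simulated placeholders; column names are assumptions.
    set.seed(1)
    landmark_df <- data.frame(
      recognized = rbinom(96, 1, 0.6),  # 1 = on-route scene correctly recognized
      interface  = rep(c("world-relative", "screen-relative"), each = 48),
      maneuver   = rep(c("left", "right"), times = 48)  # turn direction at the intersection
    )

    # Binomial GLM: does recognition vary with HUD interface or maneuver direction?
    fit <- glm(recognized ~ interface + maneuver, family = binomial, data = landmark_df)

    # Analysis of deviance: likelihood-ratio (chi-square) test as each term is added.
    anova(fit, test = "Chisq")

With the real data, the pattern reported above would correspond to a significant deviance reduction for the maneuver term and a non-significant one for the interface term.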
  4. In this paper, we present a user study of an advanced driver-assistance system (ADAS) that uses augmented reality (AR) cues to highlight pedestrians and vehicles when approaching intersections of varying complexity. Our major goal is to understand the relationship between the presence or absence of AR, driver-initiated takeover rates, and glance behavior when using an SAE Level 2 automated vehicle. To that end, a user study with eight participants was carried out on a medium-fidelity driving simulator. Overall, we found that AR cues can provide a promising means to increase system transparency, drivers' situation awareness, and trust in the system. Yet we suggest that the dynamic allocation of glances and attention during partially automated driving remains challenging for researchers, as we still have much to understand and explore about when AR cues become a distractor instead of an attention guide.