This content will become publicly available on March 11, 2026

Title: Exploring Vibrotactile Displays to Support Hazard Awareness in Multitasking Control Tasks for Heavy Machinery Work
(1) Background: The safe execution of heavy machinery operations and high-risk construction tasks requires operators to manage multiple tasks while maintaining constant awareness of coworkers and hazards. With high demands on visual and auditory resources, vibrotactile feedback systems offer a way to enhance awareness without overburdening vision or hearing. (2) Aim: This study evaluates the impact of vibrotactile feedback conveying proximity to hazards on multitasking performance and cognitive workload, with the goal of supporting hazard awareness in a controlled task environment. (3) Method: Twenty-four participants performed a joystick-controlled navigation task and a concurrent mental spatial rotation task. Proximity to hazards in the navigation task was conveyed via different encodings of vibrotactile feedback: No Vibration, Intensity-Modulation, Pulse Duration, and Pulse Spacing. Performance metrics, including obstacle collisions, target hits, contact time, and accuracy, were assessed alongside perceived workload. (4) Results: Compared to No Vibration, Intensity-Modulated feedback reduced obstacle collisions and time spent in proximity to hazards while lowering perceived workload. No significant effects were found on spatial rotation accuracy, indicating that vibrotactile feedback guided navigation and supported spatial awareness without disrupting the concurrent cognitive task. (5) Conclusions: This study highlights the potential of vibrotactile feedback to improve navigation performance and hazard awareness, offering valuable insights into multimodal safety systems in high-demand environments.
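As an illustration of the three encodings compared in the study, the sketch below maps a normalized proximity value onto vibration parameters in three different ways. All function names, parameter ranges, and timing values here are assumptions for demonstration, not the authors' implementation.

# Illustrative sketch of the three vibrotactile encodings compared in the study.
# All numeric ranges are assumed values for demonstration, not the study's settings.

def normalized_proximity(distance_m, alarm_radius_m=2.0):
    """Return 1.0 at contact, 0.0 at or beyond the alarm radius."""
    return max(0.0, min(1.0, 1.0 - distance_m / alarm_radius_m))

def intensity_modulation(p):
    # Fixed-rate vibration whose amplitude grows with proximity.
    return {"amplitude": p, "pulse_on_ms": 100, "pulse_off_ms": 100}

def pulse_duration(p):
    # Fixed amplitude; pulses lengthen as the hazard gets closer.
    return {"amplitude": 1.0, "pulse_on_ms": 50 + 250 * p, "pulse_off_ms": 100}

def pulse_spacing(p):
    # Fixed amplitude; gaps between pulses shrink as the hazard gets closer.
    return {"amplitude": 1.0, "pulse_on_ms": 100, "pulse_off_ms": 30 + 300 * (1.0 - p)}

if __name__ == "__main__":
    p = normalized_proximity(distance_m=0.5)
    for encode in (intensity_modulation, pulse_duration, pulse_spacing):
        print(encode.__name__, encode(p))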
Award ID(s): 2026574
PAR ID: 10632917
Author(s) / Creator(s): ; ;
Publisher / Repository: MDPI
Date Published:
Journal Name: Safety
Volume: 11
Issue: 1
ISSN: 2313-576X
Page Range / eLocation ID: 26
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    For the significant global population of individuals who are blind or visually impaired, spatial awareness during navigation remains a challenge. Tactile Electronic Travel Aids have been designed to assist with the provision of spatiotemporal information, but an intuitive method for mapping this information to patterns on a vibrotactile display remains to be determined. This paper explores the encoding of the distance from a navigator to an object using two strategies: absolute and relative. A wearable prototype, the HapBack, is presented with two straps of vertically aligned vibrotactile motors mapped to five distances, with each distance mapped to a row on the display. Absolute patterns emit a single vibration at the row corresponding to a distance, while relative patterns emit a sequence of vibrations starting from the bottom row and ending at the row mapped to that distance. The two encoding strategies are comparatively evaluated among ten adult participants who are blind or visually impaired. No significant difference in identification accuracy or perceived intuitiveness of mapping was found between the two encodings, with each showing promising results for application during navigation tasks.
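The two strategies can be sketched as follows. This is a minimal illustration; the row indexing, timing values, and function names are assumptions, not the HapBack firmware.

# Sketch of the two HapBack distance encodings on a 5-row display.
# Row 0 is the bottom of the display; timing values are illustrative assumptions.

NUM_ROWS = 5

def absolute_pattern(distance_row):
    """Single vibration at the row mapped to the distance."""
    assert 0 <= distance_row < NUM_ROWS
    return [(distance_row, 200)]  # list of (row, duration_ms)

def relative_pattern(distance_row):
    """Sequence of vibrations sweeping from the bottom row up to the mapped row."""
    assert 0 <= distance_row < NUM_ROWS
    return [(row, 200) for row in range(distance_row + 1)]

print(absolute_pattern(3))  # [(3, 200)]
print(relative_pattern(3))  # [(0, 200), (1, 200), (2, 200), (3, 200)]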
  2. A large body of evidence shows that the hippocampus is necessary for successful spatial navigation. Various studies have shown anatomical and functional differences between the dorsal (DHC) and ventral (VHC) portions of this structure. The DHC is primarily involved in spatial navigation and contains cells with small place fields. The VHC is primarily involved in context and emotional encoding, contains cells with large place fields, and receives major projections from the medial prefrontal cortex. In the past, spatial navigation experiments have used relatively simple tasks that may not have required strong coordination along the dorsoventral hippocampal axis. In this study, we tested the hypothesis that the DHC and VHC may be critical for goal-directed navigation in obstacle-rich environments. We used a learning task in which animals memorize the locations of a set of rewarded feeders and recall these locations in the presence of small or large obstacles. We report that bilateral DHC or VHC inactivation impaired spatial navigation in both the large and small obstacle conditions. Importantly, this impairment did not result from a deficit in spatial memory for the set of feeders (i.e., recognition of the goal locations), because DHC or VHC inactivation did not affect recall performance when there was no obstacle on the maze. We also show that the behavioral performance of the animals was correlated with several measures of maze complexity, and that these correlations were significantly affected by inactivation only in the large obstacle condition. These results suggest that as the complexity of the environment increases, both the DHC and VHC are required for spatial navigation.
  3. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources for other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be better understood before AR can be widely and safely incorporated into vehicles. Specifically, one impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals can navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation.
    Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues), in order to assess the plausibility of, and justification for, investment in generating larger-FOV AR HUDs with potentially multiple focal planes.
    Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign or a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time.
    Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher level of perceived workload. In terms of overall perceived workload, however, results suggest there is no significant difference between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes in both conditions is statistically similar, indicating that participants using both interface designs remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction had an influence on landmark knowledge test performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants exhibited poorer performance on the route knowledge task than on the landmark knowledge task (independent of HUD interface design).
    Conclusions: This study described a driving simulator experiment that evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have similar impacts on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may be effective in certain contexts.
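The abstract does not state the scoring rule for the scene-ordering task. One plausible metric, shown below purely as an assumption, scores a participant's response by its longest common subsequence with the true route order.

# Illustrative scoring for a scene-ordering (route knowledge) task.
# The metric -- longest common subsequence with the true route order,
# as a fraction of route length -- is an assumption, not the study's exact rule.

def sequencing_score(true_order, response):
    """Fraction of the route preserved, in order, in the participant's response."""
    n, m = len(true_order), len(response)
    lcs = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if true_order[i] == response[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    return lcs[n][m] / n

print(sequencing_score(["A", "B", "C", "D"], ["A", "C", "B", "D"]))  # 0.75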
  4. A vehicle on a road or a robot in the field does not need a full-featured 3D depth sensor to detect potential collisions or monitor its blind spot. Instead, it needs only to monitor whether any object comes within its near proximity, which is an easier task than full depth scanning. We introduce a novel device that monitors the presence of objects on a virtual shell near the device, which we refer to as a light curtain. Light curtains offer a lightweight, resource-efficient, and programmable approach to proximity awareness for obstacle avoidance and navigation. They also have additional benefits in terms of improving visibility in fog as well as flexibility in handling light fall-off. Our prototype for generating light curtains works by rapidly rotating a line sensor and a line laser in synchrony. The device is capable of generating light curtains of various shapes with a range of 20–30 m in sunlight (40 m under cloudy skies and 50 m indoors) and adapts dynamically to the demands of the task. We analyze the properties of light curtains and various approaches to optimizing their thickness as well as power requirements. We showcase the potential of light curtains using a range of real-world scenarios.
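The underlying geometry reduces to triangulation: for each point on the desired curtain profile, the sensor and laser steering angles are chosen so that their viewing planes intersect at that point. Below is a minimal sketch under an assumed geometry (line sensor at the origin, line laser offset along x by a fixed baseline); the baseline value and curtain profile are illustrative, not the prototype's parameters.

# Triangulation sketch for a planar light-curtain profile.
# Assumed geometry: line sensor at the origin, line laser at (BASELINE_M, 0),
# both rotating about axes parallel to the curtain's line direction.
import math

BASELINE_M = 0.3  # sensor-to-laser separation (illustrative)

def steering_angles(x, z):
    """Angles (radians from the z axis) that make the sensor and laser rays meet at (x, z)."""
    theta_sensor = math.atan2(x, z)
    theta_laser = math.atan2(x - BASELINE_M, z)
    return theta_sensor, theta_laser

# Sweep a flat curtain 5 m ahead, spanning -2..2 m laterally.
for x in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    s, l = steering_angles(x, 5.0)
    print(f"x={x:+.1f} m: sensor {math.degrees(s):+6.2f} deg, laser {math.degrees(l):+6.2f} deg")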
  5. Augmented reality (AR) embeds virtual content into a real environment. Many factors can affect the perceived physicality and co-presence of virtual entities, including hardware capabilities, the fidelity of virtual behaviors, and the sensory feedback associated with the interactions. In this paper, we present a study investigating participants' perceptions and behaviors during a time-limited search task in close proximity with virtual entities in AR. In particular, we analyze the effects of (i) visual conflicts in the periphery of an optical see-through head-mounted display (a Microsoft HoloLens), (ii) overall lighting in the physical environment, and (iii) multimodal feedback based on vibrotactile transducers mounted on a physical platform. Our results show significant benefits of vibrotactile feedback and reduced peripheral lighting for spatial presence, social presence, and engagement. We discuss the implications of these effects for AR applications.