Abstract
External representations powerfully support and augment complex human behavior. When navigating, people often consult external representations to help them find their way, but do maps or verbal instructions improve spatial knowledge or support effective wayfinding? Here, we examine spatial knowledge with and without external representations in two studies in which participants learned a complex virtual environment. In the first study, we asked participants to generate their own maps or verbal instructions partway through learning. We found no evidence of improved spatial knowledge in a pointing task requiring participants to infer the direction between two targets, either on the same route or on different routes, and no differences between groups in accurately recreating a map of the target landmarks. As a methodological note, however, pointing accuracy was correlated with the accuracy of the maps that participants drew. In the second study, participants had access to an accurate map or set of verbal instructions that they could study while learning the layout of target landmarks. Again, we found no evidence of differentially improved spatial knowledge in the pointing task, although the map group could recreate a map of the target landmarks more accurately. Overall improvement was nevertheless high, and there was evidence that the nature of improvement across all conditions depended on initial navigation ability. Our findings add to a mixed literature on the role of external representations for navigation and suggest that more substantial intervention, with more scaffolding, explicit training, enhanced visualization, and perhaps personalized sequencing, may be necessary to improve navigation ability.
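The pointing task described above is typically scored as the angular difference between the direction a participant points and the true direction between the two targets. The study's exact scoring procedure is not given here; the following is a minimal sketch of that kind of measure, with the coordinates and function names being illustrative assumptions:

```python
import math

def bearing(x1, y1, x2, y2):
    # Bearing in degrees (0-360) from point 1 to point 2 in a planar layout.
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360

def pointing_error(pointed_deg, true_deg):
    # Smallest absolute angular difference in degrees (0-180),
    # wrapping correctly around 360 (e.g. 350 vs 10 is 20, not 340).
    diff = abs(pointed_deg - true_deg) % 360
    return min(diff, 360 - diff)

# Hypothetical example: true bearing from landmark A at (0, 0)
# to landmark B at (1, 1), versus a participant's pointed response.
true_b = bearing(0, 0, 1, 1)        # 45.0 degrees
err = pointing_error(80.0, true_b)  # 35.0 degrees
```

Averaging such errors across target pairs gives a single pointing-accuracy score per participant, which is the kind of quantity that could be correlated with map-drawing accuracy.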
Yellowknives Dene and Gwich’in Stellar Wayfinding in Large-Scale Subarctic Landscapes
Indigenous systems of stellar wayfinding are rarely described or robustly attested outside of maritime contexts, with few examples reported among peoples of the high Arctic and some desert regions. However, like other large-scale environments with low landmark legibility, the barrenlands of the Northwest Territories and the Yukon Flats of Alaska generally lack views of the prominent or distinguishing topography needed for classic route-based navigation. When travelling off trails and waterways in these inland subarctic environments, the Yellowknives Dene and the Alaskan Gwich’in use drastically different stellar wayfinding approaches from one another while essentially sharing the same view of the sky. In both systems, however, the celestial schema is suspended in favor of route-based navigation when the traveller intersects a familiar geographical feature or trail near their target destination, suggesting a strong preference for orienting by landmarks when they are available. A comparison of the two wayfinding systems suggests that large-scale environments lacking a readily discernible ground pattern may be more conducive to the development and implementation of a celestial wayfinding schema when combined with other influential factors such as culture, individual experience, and travel behavior. These are likely the first stellar wayfinding systems described in detail for any inland subarctic culture.
- Award ID(s): 1753650
- PAR ID: 10346070
- Date Published:
- Journal Name: ARCTIC
- Volume: 75
- Issue: 2
- ISSN: 0004-0843
- Page Range / eLocation ID: 180 to 197
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
GPS accuracy is poor in indoor environments and around buildings, so reading and following signs remains the most common mechanism for providing and receiving wayfinding information in such spaces. This puts individuals who are blind or visually impaired (BVI) at a great disadvantage. This work designs, implements, and evaluates a wayfinding system and smartphone application called CityGuide that BVI individuals can use to navigate their surroundings beyond what is possible with a GPS-based system alone. CityGuide enables an individual to query and receive turn-by-turn shortest-route directions from an indoor location to an outdoor location. It leverages recently developed indoor wayfinding solutions in conjunction with GPS signals to provide a seamless indoor-outdoor navigation and wayfinding system that guides a BVI individual to their desired destination along the shortest route. Evaluations of CityGuide with BVI human subjects navigating from an indoor starting point to an outdoor destination on an unfamiliar university campus showed it to be effective in reducing end-to-end navigation times and distances for almost all participants.
-
This paper presents a brief overview of the related research the author has been involved with in the area of navigation and wayfinding for people with visual impairments. The first major piece of research presented is the building and deployment of GuideBeacon, a beacon-based indoor navigation and wayfinding system for people with visual impairments. The second is a broader community-based effort called CityGuide to enable various location-based services (including navigation and wayfinding) in both indoor and outdoor environments for people with disabilities. The paper concludes by summarizing a specific challenge in the area that warrants future research attention.
-
Maps have long been a favored tool for navigation in both physical and virtual environments. As a navigation aid in virtual reality, map content and appearance can differ significantly. This paper addresses three mini-maps: the WiM-3DMap, which provides a standard World-in-Miniature of the city model; the novel UC-3DMap, featuring important landmarks alongside ordinary buildings within the user’s vicinity; and the LM-3DMap, presenting only important landmarks. These mini-maps offer varying levels of building detail, potentially affecting spatial knowledge acquisition in different ways. A comparative study evaluated the effectiveness of WiM-3DMap, UC-3DMap, LM-3DMap, and a baseline condition without a mini-map in spatial tasks such as spatial updating, landmark recall, landmark placement, and route recall. The findings demonstrated that LM-3DMap and UC-3DMap outperform WiM-3DMap in spatial updating, landmark placement, and route recall. However, the absence of detailed local context around the user may impede the effectiveness of LM-3DMap, as evidenced by UC-3DMap’s superior performance in the landmark placement task. These findings underscore the differences in effectiveness among mini-maps that present distinct levels of building detail. A key conclusion is that including ordinary building information in the user’s immediate surroundings can significantly enhance the performance of a mini-map that relies solely on landmarks.
-
Internet image collections containing photos captured by crowds of photographers show promise for enabling digital exploration of large‐scale tourist landmarks. However, prior works focus primarily on geometric reconstruction and visualization, neglecting the key role of language in providing a semantic interface for navigation and fine‐grained understanding. In more constrained 3D domains, recent methods have leveraged modern vision‐and‐language models as a strong prior of 2D visual semantics. While these models display an excellent understanding of broad visual semantics, they struggle with unconstrained photo collections depicting such tourist landmarks, as they lack expert knowledge of the architectural domain and fail to exploit the geometric consistency of images capturing multiple views of such scenes. In this work, we present a localization system that connects neural representations of scenes depicting large‐scale landmarks with text describing a semantic region within the scene, by harnessing the power of state-of-the-art vision‐and‐language models with adaptations for understanding landmark scene semantics. To bolster such models with fine‐grained knowledge, we leverage large‐scale Internet data containing images of similar landmarks along with weakly‐related textual information. Our approach is built upon the premise that images physically grounded in space can provide a powerful supervision signal for localizing new concepts, whose semantics may be unlocked from Internet textual metadata with large language models. We use correspondences between views of scenes to bootstrap spatial understanding of these semantics, providing guidance for 3D‐compatible segmentation that ultimately lifts to a volumetric scene representation. To evaluate our method, we present a new benchmark dataset containing large‐scale scenes with ground‐truth segmentations for multiple semantic concepts. Our results show that HaLo‐NeRF can accurately localize a variety of semantic concepts related to architectural landmarks, surpassing the results of other 3D models as well as strong 2D segmentation baselines. Our code and data are publicly available at https://tau‐vailab.github.io/HaLo‐NeRF/
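The shortest-route computation described for CityGuide above can be sketched as a standard Dijkstra search over a single weighted graph whose nodes mix indoor waypoints (e.g. beacon locations) with outdoor GPS waypoints. The graph, node names, and distances below are hypothetical illustrations, not CityGuide's actual data model:

```python
import heapq

def shortest_route(graph, start, goal):
    # Dijkstra's algorithm over a weighted graph given as
    # {node: [(neighbor, distance_m), ...]}; returns (path, total_distance).
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if goal not in dist:
        return None, float("inf")
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Hypothetical mixed indoor/outdoor graph (edge weights in meters);
# "main_exit" is the seam where indoor beacons hand off to GPS.
campus = {
    "room_112":  [("lobby", 30)],
    "lobby":     [("room_112", 30), ("main_exit", 15)],
    "main_exit": [("lobby", 15), ("quad", 60)],
    "quad":      [("main_exit", 60), ("library", 120)],
    "library":   [("quad", 120)],
}
route, meters = shortest_route(campus, "room_112", "library")
# route == ["room_112", "lobby", "main_exit", "quad", "library"], meters == 225.0
```

Turn-by-turn guidance then amounts to walking this node sequence, switching between beacon-based positioning indoors and GPS outdoors at the seam node.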