iASSIST: An iPhone-Based Multimedia Information System for Indoor Assistive Navigation

iASSIST is an iPhone-based assistive sensor solution for independent and safe travel by people who are blind or visually impaired, or anyone who faces challenges navigating an unfamiliar indoor environment. The solution integrates information from floor plans, Bluetooth beacons, Wi-Fi/cellular data connectivity, 2D/3D visual models, and user preferences. Hybrid models of interiors are created in a modeling stage from these multimodal data, which are collected as the modeler walks through the building and mapped to the floor plan. A client-server architecture allows the system to scale to large areas by lazy-loading models according to beacon signals and/or adjacent-region proximity. During the navigation stage, a user running the navigation app is localized within the floor plan, using visual, connectivity, and user-preference data, and guided along an optimal route to their destination. User interfaces for both modeling and navigation use multimedia channels, including visual, audio, and haptic feedback, for the targeted users. Although the current pandemic precluded a full human-subject study, we describe the experiment design and report preliminary results.
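The beacon-driven lazy loading described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the RSSI threshold, region identifiers, and cache API are all assumptions.

```python
# Sketch of beacon-driven lazy loading of region models (illustrative only;
# the threshold, region IDs, and fetch callable are assumptions, not iASSIST's API).

RSSI_THRESHOLD = -75  # assumed dBm cutoff for a beacon to count as "nearby"

class ModelCache:
    """Keeps in memory only the models for regions whose beacons are currently heard."""

    def __init__(self, fetch_model):
        self._fetch = fetch_model  # callable: region_id -> model (e.g., a server call)
        self._loaded = {}          # region_id -> model

    def update(self, beacon_rssi):
        """beacon_rssi maps region_id -> strongest RSSI (dBm) heard for that region.
        Loads models for newly nearby regions, evicts out-of-range ones."""
        nearby = {r for r, rssi in beacon_rssi.items() if rssi >= RSSI_THRESHOLD}
        for region in nearby - self._loaded.keys():
            self._loaded[region] = self._fetch(region)  # lazy-load on demand
        for region in set(self._loaded) - nearby:
            del self._loaded[region]                    # evict out-of-range regions
        return sorted(self._loaded)
```

As the user walks, each beacon scan drives one `update` call, so only models for the current neighborhood are held, which is what lets the approach scale to large buildings.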
        
    
- Award ID(s): 1740622
- PAR ID: 10252134
- Date Published:
- Journal Name: International Journal of Multimedia Data Engineering and Management
- Volume: 11
- Issue: 4
- ISSN: 1947-8534
- Page Range / eLocation ID: 38 to 59
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Indoor navigation systems are very useful in large, complex indoor environments such as shopping malls. Current systems focus on improving indoor localization accuracy and must be combined with an accurately labeled floor plan to provide usable indoor navigation services. Such labeled floor plans are often unavailable or prohibitively costly to obtain manually. In this paper, we present IndoorWaze, a novel crowdsourcing-based context-aware indoor navigation system that can automatically generate an accurate context-aware floor plan with labeled indoor POIs, for the first time in the literature. IndoorWaze combines the Wi-Fi fingerprints of indoor walkers with the Wi-Fi fingerprints and POI labels provided by POI employees to produce a high-fidelity labeled floor plan. As a lightweight crowdsourcing-based system, IndoorWaze requires very little effort from indoor walkers and POI employees. We prototype IndoorWaze on Android smartphones and evaluate it in a large shopping mall. Our results show that IndoorWaze can generate a high-fidelity labeled floor plan, in which all the stores are correctly labeled and arranged, all the pathways and crossings are correctly shown, and the median estimation error for store dimensions is below 12%.
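The core idea of associating a walker's Wi-Fi scan with an employee-labeled POI can be illustrated with simple nearest-neighbor matching in RSSI space. This is a hedged sketch, not IndoorWaze's actual algorithm; the AP names, RSSI values, and the missing-AP penalty are invented for illustration.

```python
# Illustrative nearest-neighbor fingerprint matching (not IndoorWaze's actual
# algorithm): a walker's Wi-Fi scan is labeled with the POI whose employee-provided
# fingerprint is closest in RSSI space. All names and values here are made up.

import math

MISSING_RSSI = -100  # assumed penalty RSSI for an AP absent from a scan

def fingerprint_distance(fp_a, fp_b):
    """Euclidean distance over the union of APs seen in either fingerprint."""
    aps = fp_a.keys() | fp_b.keys()
    return math.sqrt(sum(
        (fp_a.get(ap, MISSING_RSSI) - fp_b.get(ap, MISSING_RSSI)) ** 2
        for ap in aps))

def label_scan(scan, labeled_fps):
    """Return the POI label whose employee fingerprint is closest to the scan."""
    return min(labeled_fps, key=lambda poi: fingerprint_distance(scan, labeled_fps[poi]))

# Hypothetical employee-provided fingerprints (POI label -> {AP: RSSI in dBm}).
labeled = {
    "coffee_shop": {"ap1": -40, "ap2": -70},
    "bookstore":   {"ap1": -75, "ap3": -45},
}
```

Aggregating many such labeled walker traces is what would let a crowdsourced system lay the stores out along the pathways the walkers actually traverse.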
- 
Indoor navigation in complex building environments poses significant challenges, particularly for individuals who are unfamiliar with their surroundings. Mixed reality (MR) technologies have emerged as a promising solution to enhance situational awareness and facilitate navigation within indoor spaces. However, there is a lack of spatial data for indoor environments, including outdated floor plans and limited real-time operational data. This paper presents the development of a mixed-reality application for indoor building navigation and evacuation. The application uses feature extraction for location sensing and situational awareness to provide accurate and reliable navigation in any indoor environment using Microsoft HoloLens. The application can track the user's position and orientation and give the user specific information on how to evacuate the building; this information is then used to generate navigation instructions. We demonstrate how this mixed reality HoloLens application can provide spatially contextualized 3D visualizations that promote spatial knowledge acquisition and situational awareness. These 3D visualizations are developed as an emergency evacuation and navigation tool to aid building occupants in safe and quick evacuation. Experimental results demonstrate the effectiveness of the application, providing 3D visualizations of multilevel spaces and aiding individuals in understanding their position and evacuation path during emergencies. We believe that adopting mixed reality technologies, such as the HoloLens, can greatly enhance individuals' ability to navigate large-scale environments during emergencies by promoting spatial knowledge acquisition and supporting cognitive mapping.
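The abstract does not specify how evacuation routes are computed; a breadth-first search from the user's position to the nearest exit on a simple occupancy grid is one common way to illustrate the idea. The grid encoding and layout below are assumptions for illustration only.

```python
# Toy evacuation routing (an illustrative assumption, not the paper's method):
# breadth-first search from the user's cell to the nearest exit on an
# occupancy grid, where 0 = walkable, 1 = wall, 2 = exit.

from collections import deque

def evacuation_path(grid, start):
    """Return the shortest list of (row, col) cells from start to any exit,
    or None if no exit is reachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if grid[r][c] == 2:
            return path  # BFS guarantees this is a shortest path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and grid[nr][nc] != 1):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None
```

In an MR client, each cell along the returned path could be rendered as a waypoint overlaid on the user's view.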
- 
During emergencies such as fire, smoke, or active shooter events, there is a need to address vulnerability and assess evacuation plans. With recent improvements in smartphone technology, there is an opportunity for geo-visual environments that offer experiential learning by providing spatial analysis and visual communication of emergency-related information to the user. This paper presents the development and evaluation of a mobile augmented reality application (MARA) designed to support spatial analysis, situational awareness, and visual communication. The MARA uses existing permanent features in the building, such as room numbers and signage, as markers to display the building's floor plan and show navigational directions to the exit. Through visualization of integrated geographic information systems and real-time data analysis, the MARA provides the person's current location, the number of exits, and user-specific personalized evacuation routes. The paper also describes a limited user study conducted to assess the usability and effectiveness of the MARA using the widely recognized System Usability Scale (SUS) framework. The results show the effectiveness of our situational-awareness-based MARA in multilevel buildings for evacuation, educational, and navigational purposes.
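The usability evaluation mentioned above relies on the standard System Usability Scale. Its scoring rule is well established (odd-numbered items contribute the rating minus 1, even-numbered items contribute 5 minus the rating, and the raw sum is scaled by 2.5 to a 0-100 range); the sample responses below are invented for illustration.

```python
# Standard SUS scoring (Brooke's formula); the example ratings are hypothetical.

def sus_score(responses):
    """responses: list of ten ratings on a 1-5 Likert scale, item 1 first.
    Returns the SUS score on a 0-100 scale."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded (score - 1); even items are
        # negatively worded (5 - score).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100
```

A score of 68 is the commonly cited average benchmark, so a study reporting SUS results would typically compare participants' mean score against that reference point.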
- 
Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and/or often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are utilized in a downstream task that employs a weighted-average method to estimate the end user's location. Another downstream task utilizes the perspective-n-point (PnP) algorithm to estimate the end user's direction by exploiting the 2D-3D point correspondences between the query image and the 3D environment, as extracted from matched images in the database. Additionally, the system implements Dijkstra's algorithm to calculate a shortest path based on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or scavenged from public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts a custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. The evaluation results demonstrate that our system can achieve localization with an average error of less than 1 m without knowledge of the camera's intrinsic parameters, such as focal length.
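The weighted-average location estimate described in this pipeline can be sketched in a few lines. The positions, similarity scores, and 2D coordinate frame below are illustrative assumptions; in practice the weights would come from whatever similarity measure the VPR model produces.

```python
# Sketch of a similarity-weighted average over the geolocations of VPR matches
# (positions and scores are invented; not the paper's exact implementation).

def estimate_location(matches):
    """matches: list of ((x, y), similarity) pairs, where (x, y) is the
    geolocation of a matched reference image and similarity > 0 is its
    VPR score. Returns the similarity-weighted mean position."""
    total = sum(sim for _, sim in matches)
    x = sum(pos[0] * sim for pos, sim in matches) / total
    y = sum(pos[1] * sim for pos, sim in matches) / total
    return (x, y)
```

Higher-scoring matches pull the estimate toward their locations, so a few confident retrievals dominate a long tail of weak ones; heading would then come from the separate PnP step, and routing from Dijkstra's algorithm over the navigable map.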