Title: All-in-One Urban Mobility Mapping Application with Optional Routing Capabilities
To create safer and less congested traffic operating environments, researchers at the University of Tennessee at Chattanooga (UTC) and the Georgia Tech Research Institute (GTRI) have fostered a vision of cooperative sensing and cooperative mobility. This vision is realized in a mobile application that combines visual data extracted from cameras on roadway infrastructure with a user's coordinates from a GPS-enabled device to create a visual representation of the driving or walking environment surrounding the application user. By merging computer vision, object detection, and mono-vision image depth estimation, the application gathers absolute Global Positioning System (GPS) coordinates from a user's mobile device, combines them with relative coordinates determined by the infrastructure cameras, and thereby determines the positions of vehicles and pedestrians without knowledge of their absolute GPS coordinates. The joined data is then used by an iOS mobile application to display a map showing the locations of other entities such as vehicles, pedestrians, and obstacles, creating a real-time visual representation of the surrounding area before it enters the user's field of view. Furthermore, a routing feature was implemented that displays recommended routes derived from a traffic scenario analyzed by rerouting algorithms in a simulated environment. By displaying where proximal entities are concentrated and showing recommended optional routes, users can be more informed and aware when making traffic decisions, helping ensure a higher level of overall safety on our roadways. This vision would not be possible without the high-speed gigabit network infrastructure installed in Chattanooga, Tennessee, and UTC's wireless testbed, which was used to test many functions of this application.
This network was required to reduce the latency of the massive amount of data generated by the infrastructure and vehicles that utilize the testbed; having results from this data come back in real time is a critical component.
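The coordinate fusion described above, combining a user's absolute GPS fix with camera-derived relative offsets, can be sketched as follows. The function name and the flat-Earth approximation are illustrative assumptions rather than the authors' implementation:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def offset_to_gps(cam_lat, cam_lon, east_m, north_m):
    """Convert a camera-relative offset (meters east/north of a camera at a
    known GPS position) into absolute GPS coordinates using a local flat-Earth
    approximation, which is adequate over the short ranges a roadside camera covers."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat))))
    return cam_lat + dlat, cam_lon + dlon

# A detected pedestrian 12 m east and 5 m north of a camera in Chattanooga:
lat, lon = offset_to_gps(35.0456, -85.3097, 12.0, 5.0)
```

With coordinates like these for every detected entity, the mobile application only needs the user's own GPS fix to place everything on a shared map.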
Award ID(s):
1647167
NSF-PAR ID:
10083466
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
Big Data
ISSN:
2167-6461
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Vehicle-to-pedestrian communication could significantly improve pedestrian safety at signalized intersections. However, it is unlikely that pedestrians will always carry a low-latency, communication-enabled hand-held device with an activated pedestrian safety application. Because of this, multiple traffic cameras at a signalized intersection could be used to accurately detect and locate pedestrians using deep learning, and to broadcast pedestrian-related safety alerts to warn connected and automated vehicles around signalized intersections. However, the unavailability of high-performance roadside computing infrastructure and the limited network bandwidth between traffic cameras and the computing infrastructure limit the ability to stream and process data in real time for pedestrian detection. In this paper, we describe an edge computing-based real-time pedestrian detection strategy that combines a deep learning pedestrian detection algorithm with an efficient data communication approach to reduce bandwidth requirements while maintaining high pedestrian detection accuracy. We utilize a lossy compression technique on traffic camera data to determine the tradeoff between reduced communication bandwidth requirements and a defined pedestrian detection accuracy. The performance of the pedestrian detection strategy is measured in terms of pedestrian classification accuracy at varying peak signal-to-noise ratios. The analyses reveal that we can detect pedestrians at a defined detection accuracy with a peak signal-to-noise ratio of 43 dB while reducing the communication bandwidth from 9.82 Mbits/sec to 0.31 Mbits/sec, a 31× reduction.
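The tradeoff above is expressed in peak signal-to-noise ratio (PSNR), a standard image-quality metric. A minimal sketch of how PSNR and the reported bandwidth reduction are computed (the helper name is hypothetical, not from the paper):

```python
import numpy as np

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between an original frame and its
    lossy-compressed version; higher PSNR means less compression loss."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# The bandwidth reduction reported above:
reduction = 9.82 / 0.31  # ≈ 31.7×
```

The study's contribution is locating the operating point (43 dB here) where compression can be pushed no further without dropping below the defined detection accuracy.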
  2. Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are used in a downstream task that employs a weighted-average method to estimate the end user's location. Another downstream task uses the perspective-n-point (PnP) algorithm to estimate the end user's direction by exploiting the 2D–3D point correspondences between the query image and the 3D environment, as extracted from matched images in the database. Additionally, the system implements Dijkstra's algorithm to calculate a shortest path based on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or scavenged from public databases and citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts a custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. The evaluation results demonstrate that our system achieves localization with an average error of less than 1 m without knowledge of the camera's intrinsic parameters, such as focal length.
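The weighted-average localization step described above can be sketched roughly as follows; the function name and the use of VPR similarity scores as weights are assumptions for illustration:

```python
import numpy as np

def estimate_location(geolocations, similarities):
    """Estimate the user's position as the similarity-weighted average of the
    geolocations of the top-k reference images returned by the VPR algorithm."""
    geo = np.asarray(geolocations, dtype=float)  # shape (k, 2): x, y in meters
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                              # normalize weights to sum to 1
    return w @ geo                               # weighted centroid

# Three matched reference images with descending similarity scores:
pos = estimate_location([[2.0, 1.0], [2.4, 1.2], [3.0, 0.8]], [0.9, 0.6, 0.5])
```

Weighting by similarity lets the closest-matching reference images dominate the estimate while still smoothing over individual mismatches.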
  3. Cities offer extensive facilities to enrich the quality of life by utilizing smart devices and sensors. The Internet of Things and smart sensors connect various city services with inhabitants. These services should be convenient and accessible to all, especially pedestrians and people with visual impairment. However, the lack of information about service locations often limits their availability and use. To this end, we developed FinderX, a Bluetooth beacon-based system to search for the nearest services and amenities. FinderX identifies the locations of nearby amenities in real time using the signal from attached beacons. The system does not require Internet or other communication infrastructure and can function where the GPS signal is inaccessible. To demonstrate the feasibility of FinderX, we set up a testbed and evaluated the system in an urban environment. We show that FinderX has adequate usability and feasibility and reduces the time to find amenities by 18.98% on average. We also demonstrate that Bluetooth beacons have lower horizontal error than GPS in micro-positioning (where semi-indoor or surrounding infrastructure limits signal accessibility), which motivates the use of Bluetooth beacons for such applications.
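The abstract does not give FinderX's ranging details; a common way to rank beacons by proximity is the log-distance path-loss model, sketched below with hypothetical helper names and a typical 1 m calibration value:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance (m) from a beacon's RSSI using the log-distance
    path-loss model; tx_power_dbm is the calibrated RSSI at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def nearest_amenity(readings):
    """readings: dict mapping amenity name -> latest RSSI (dBm).
    Returns the amenity whose beacon appears closest."""
    return min(readings, key=lambda name: rssi_to_distance(readings[name]))

nearest = nearest_amenity({"water fountain": -72, "restroom": -65, "exit": -80})
```

Because ranging uses only received signal strength, this works with no Internet connection and wherever GPS is blocked, matching the scenario described above.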
  4. The operational safety of Automated Driving System (ADS)-Operated Vehicles (AVs) is a rising concern as AVs are deployed both as prototypes under test and in commercial service. The robustness of safety evaluation systems is essential in determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work by the Institute of Automated Mobility (IAM) on Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, in this work we compare the performance of an infrastructure-based Light Detection and Ranging (LIDAR) system to an onboard vehicle-based LIDAR system in testing at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor modalities, located both in the infrastructure and onboard the test vehicles, include LIDAR, cameras, a real-time differential GPS, and a drone with a camera. Bespoke localization and tracking algorithms were created for the LIDAR and cameras. In total, there are 26 different scenarios of the test vehicles navigating the testbed intersection; for this work, we consider only car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard vehicle-based sensor systems are used to perform object detection and multi-target tracking to estimate the velocity and position of the test vehicles, and these values are used to compute OSA metrics. The comparison of the two systems involves the localization and tracking errors in calculating the position and velocity of the subject vehicle, with the real-time differential GPS data serving as ground truth for velocity comparison and tracking results from the drone for OSA metrics comparison.
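The abstract does not list the specific OSA metrics computed; time-to-collision is one commonly used operational safety metric that can be derived from exactly the position and velocity estimates described above. A minimal sketch for a car-following pair (the helper name is hypothetical):

```python
def time_to_collision(gap_m, lead_speed_mps, follow_speed_mps):
    """Time-to-collision (s) for a car-following pair: the inter-vehicle gap
    divided by the closing speed. Returns None when the follower is not
    closing on the lead vehicle, i.e. no collision is projected."""
    closing = follow_speed_mps - lead_speed_mps
    if closing <= 0:
        return None
    return gap_m / closing

# Follower at 14 m/s, 20 m behind a lead vehicle at 10 m/s:
ttc = time_to_collision(gap_m=20.0, lead_speed_mps=10.0, follow_speed_mps=14.0)
```

Errors in the tracked gap or speeds propagate directly into such metrics, which is why the comparison against differential-GPS ground truth matters.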
  5. Current collaborative augmented reality (AR) systems establish a common localization coordinate frame among users by exchanging and comparing maps comprised of feature points. However, relative positioning through map sharing struggles in dynamic or feature-sparse environments. It also requires that users exchange identical regions of the map, which may not be possible if they are separated by walls or facing different directions. In this paper, we present Cappella (like its musical inspiration, Cappella utilizes collaboration among agents to forgo the need for instrumentation), an infrastructure-free 6-degrees-of-freedom (6DOF) positioning system for multi-user AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system. Cappella uses visual-inertial odometry (VIO) in conjunction with ultra-wideband (UWB) ranging radios to estimate the relative position of each device in an ad hoc manner. The system leverages a collaborative particle filtering formulation that operates on sporadic messages exchanged between nearby users. Unlike visual landmark sharing approaches, this allows for collaborative AR sessions even if users do not share the same field of view, or if the environment is too dynamic for feature matching to be reliable. We show not only that it is possible to perform collaborative positioning without infrastructure or global coordinates, but that our approach provides nearly the same level of accuracy as fixed-infrastructure approaches for AR teaming applications. Cappella consists of open-source UWB firmware and a reference mobile phone application that can display the location of team members in real time using mobile AR. We evaluate Cappella across multiple buildings under a wide variety of conditions, including a contiguous 30,000 square foot region spanning multiple floors, and find that it achieves a median 3D geometric error of less than 1 meter.
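Cappella's actual estimator is a collaborative particle filter, which is beyond a short sketch; as a simpler illustration of how peer range measurements constrain relative position, a least-squares trilateration against peers at known relative positions might look like this (all names are hypothetical):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2D position from range measurements to peers at known
    relative positions, obtained by subtracting the first range equation
    from the others to linearize ||p - anchor_i|| = range_i."""
    anchors = np.asarray(anchors, dtype=float)   # shape (n, 2)
    ranges = np.asarray(ranges, dtype=float)     # shape (n,)
    x0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Ranges from a device at (1, 2) to three peers:
est = trilaterate([[0, 0], [4, 0], [0, 4]], [5 ** 0.5, 13 ** 0.5, 5 ** 0.5])
```

A particle filter generalizes this idea: instead of a single algebraic solve, it fuses noisy, sporadic UWB ranges with each device's VIO motion estimate over time.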