

Title: Comparison of Infrastructure- and Onboard Vehicle-Based Sensor Systems in Measuring Operational Safety Assessment (OSA) Metrics

The operational safety of Automated Driving System (ADS)-Operated Vehicles (AVs) is a rising concern as AVs are deployed both as test prototypes and in commercial service. Robust safety evaluation systems are essential for determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work by the Institute of Automated Mobility (IAM) on Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, in this work we compare the performance of an infrastructure-based Light Detection And Ranging (LIDAR) system against an onboard vehicle-based LIDAR system in tests at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor suite, located both in the infrastructure and onboard the test vehicles, includes LIDAR, cameras, a real-time differential GPS, and a camera-equipped drone. Bespoke localization and tracking algorithms were developed for the LIDAR and cameras. In total, 26 different scenarios of the test vehicles navigating the testbed intersection were recorded; in this work, we consider only the car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard vehicle-based sensor systems are used to perform object detection and multi-target tracking, which estimate the position and velocity of the test vehicles; these estimates are then used to compute the OSA metrics. The comparison of the two systems covers the localization and tracking errors in calculating the position and velocity of the subject vehicle, with the real-time differential GPS data serving as ground truth for the velocity comparison and the tracking results from the drone serving as ground truth for the OSA metrics comparison.
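
As an illustration of how the tracked states feed the metrics, the sketch below computes two common car-following quantities used in OSA-style evaluation, time headway and time-to-collision, from per-frame position and speed estimates. This is a minimal example under assumed interfaces, not the paper's implementation; the IAM OSA suite defines a broader set of metrics, and the 0.1 m/s thresholds here are illustrative.

    import numpy as np

    def car_following_metrics(lead_pos, follow_pos, lead_speed, follow_speed):
        # lead_pos, follow_pos: (N, 2) arrays of tracked x/y positions [m]
        # lead_speed, follow_speed: (N,) arrays of tracked speeds [m/s]
        gap = np.linalg.norm(lead_pos - follow_pos, axis=1)  # center-to-center gap [m]
        inf = np.full_like(gap, np.inf)
        # Time headway: time for the follower to traverse the current gap.
        headway = np.divide(gap, follow_speed, out=inf.copy(), where=follow_speed > 0.1)
        # Time-to-collision: finite only while the follower is closing the gap.
        closing = follow_speed - lead_speed
        ttc = np.divide(gap, closing, out=inf.copy(), where=closing > 0.1)
        return gap, headway, ttc

The same computation can be run on the infrastructure-based, onboard, and drone-derived tracks, and the frames where the metrics fall below a chosen threshold compared across the systems.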

 
Award ID(s):
2137295
NSF-PAR ID:
10495587
Publisher / Repository:
SAE Technical Paper
Format(s):
Medium: X
Location:
Detroit, Michigan, United States
Sponsoring Org:
National Science Foundation
More Like this
  1. Weather, winds, thermals, and turbulence pose an ever-present challenge to small UAS. These challenges become magnified in rough terrain and especially within urban canyons. As the industry moves towards Beyond Visual Line of Sight (BVLOS) and fully autonomous operations, resilience to weather perturbations will be key. As the human decision-maker is removed from the in-situ environment, producing robust control systems will be paramount to the preservation of any airspace system. Safety requirements and regulations require quantifiable performance metrics to guarantee a safe aerial environment with ever-increasing traffic. In this regard, the effect of wind and weather disturbances on a UAS and its ability to reject these disturbances present some unique concerns. Currently, drone manufacturers and operators rely on outdoor testing during windy days (or in windy locations) and onboard logging to evaluate and improve the flight worthiness, reliability, and perturbation-rejection capability of their vehicles. Waiting for the desired weather or travelling to a windier location is cost- and time-inefficient. Moreover, the conditions found on outdoor test sites are difficult to quantify and repeatability is non-existent. To address this situation, a novel testing methodology is proposed, combining artificial wind generation by a multi-fan array wind generator (windshaper), coherent GNSS signal generation, and accurate tracking of the test subject by motion-capture cameras. In this environment, the drone being tested can fly freely, follow missions, and experience wind perturbations whilst staying in a modest indoor volume. By coordinating the windshaper, the motion-tracking feedback, and the position emulated by the GNSS signal generator with the drone's mission profile, it was demonstrated that outdoor flight conditions can be reliably recreated in a controlled and repeatable environment. Specifically, thanks to real-time updates of the position simulated by the GNSS signal generator, it was possible to demonstrate that the drone's perception of the situation is similar to a corresponding mission being executed outdoors. In this work, the drone was subjected to three distinct flight cases: (1) hover in 2 m s−1 wind, (2) forward flight at 2 m s−1 without wind, and (3) forward flight at 2 m s−1 with 2 m s−1 headwind. In each case, it could be demonstrated that by using indoor GNSS signal simulation and wind generation, the drone displays the characteristics of a 20 m forward move, to within ±1 m, while actually staying stationary in the test volume. Further development of this methodology opens the door for fully integrated hardware-in-the-loop simulation of drone flight operations.
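    One plausible way to coordinate the GNSS simulator with the motion-capture feedback, sketched below, is to advance a virtual reference along the outdoor mission profile and add the drone's real deviation inside the test volume to the position fed to the simulator. The scheme and its interfaces are assumptions for illustration; the abstract does not publish the actual windshaper or GNSS-simulator APIs.

        def emulated_gnss_position(virtual_ref, mocap_pos, volume_origin):
            # virtual_ref:   (east, north) [m] point advancing along the
            #                virtual outdoor mission at its ground speed
            # mocap_pos:     (east, north) [m] drone position from motion capture
            # volume_origin: (east, north) [m] nominal hold point in the volume
            dev_east = mocap_pos[0] - volume_origin[0]
            dev_north = mocap_pos[1] - volume_origin[1]
            # Feed the autopilot its virtual location plus its real tracking
            # error, so corrections are flown in the volume, not only virtually.
            return (virtual_ref[0] + dev_east, virtual_ref[1] + dev_north)

    Each control tick, the virtual reference would be advanced by the mission ground speed times the tick duration and the returned position pushed to the signal generator.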
  2. To create safer and less congested traffic operating environments, researchers at the University of Tennessee at Chattanooga (UTC) and the Georgia Tech Research Institute (GTRI) have fostered a vision of cooperative sensing and cooperative mobility. This vision is realized in a mobile application that combines visual data extracted from cameras on roadway infrastructure with a user's coordinates from a GPS-enabled device to create a visual representation of the driving or walking environment surrounding the application user. By merging the concepts of computer vision, object detection, and mono-vision image depth calculation, the application gathers absolute Global Positioning System (GPS) coordinates from a user's mobile device, combines them with relative coordinates determined by the infrastructure cameras, and thereby determines the positions of vehicles and pedestrians without knowledge of their absolute GPS coordinates. The joined data are then used by an iOS mobile application to display a map showing the location of other entities such as vehicles, pedestrians, and obstacles, creating a real-time visual representation of the surrounding area before it appears in the user's visual perspective. Furthermore, a feature was implemented to display routing by using the results of a traffic scenario that was analyzed by rerouting algorithms in a simulated environment. By displaying where proximal entities are concentrated and showing recommended alternative routes, users can be more informed and aware when making traffic decisions, helping ensure a higher level of overall safety on our roadways. This vision would not be possible without the high-speed gigabit network infrastructure installed in Chattanooga, Tennessee, and UTC's wireless testbed, which was used to test many functions of this application. This network was required to reduce the latency of the massive amount of data generated by the infrastructure and vehicles that utilize the testbed; having results from this data come back in real time is a critical component.
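    The anchoring step, turning a camera-relative offset into absolute coordinates, reduces to standard geodesy. The sketch below is generic rather than the project's code, and it assumes the mono-vision depth estimate has already been resolved into east/north meters from a camera whose own GPS position is known.

        import math

        EARTH_RADIUS_M = 6371000.0  # mean Earth radius

        def offset_to_gps(cam_lat_deg, cam_lon_deg, east_m, north_m):
            # Flat-earth (equirectangular) approximation: adequate at the
            # few-hundred-meter ranges of an infrastructure camera.
            dlat = math.degrees(north_m / EARTH_RADIUS_M)
            dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat_deg))))
            return cam_lat_deg + dlat, cam_lon_deg + dlon

    At these ranges, the error of this linearization is negligible compared with the error of mono-vision depth estimation itself.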
  3. Drone-based last-mile delivery is an emerging technology that uses drones loaded onto a truck to deliver parcels to customers. In this paper, we introduce a fully automated system for drone-based last-mile delivery through the incorporation of autonomous vehicles (AVs). A novel problem called the autonomous vehicle routing problem with drones (A-VRPD) is defined. A-VRPD selects AVs from a pool of available AVs based on crowdsourcing, assigns the selected AVs to customer groups, and schedules routes for the selected AVs to optimize the total operational cost. We formulate A-VRPD as a Mixed Integer Linear Program (MILP) and propose an optimization framework to solve the problem. A greedy algorithm is also developed to significantly improve the running time for large-scale delivery scenarios. Extensive simulations were conducted taking into account real-world operational costs for different types of AVs, travel distances calculated from real-time traffic conditions using the Google Maps API, and varying load capacities of AVs. We evaluated the performance in comparison with two different state-of-the-art solutions: an algorithm designed for the traditional vehicle routing problem with drones (VRP-D), in which human-operated trucks work in tandem with drones to deliver parcels, and an algorithm for the two-echelon vehicle routing problem (2E-VRP), in which parcels are first transported to satellite locations and subsequently delivered from those satellites to the customers. The results indicate a substantial increase in profits for both the delivery company and vehicle owners compared with the state-of-the-art algorithms.
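    The abstract does not detail the greedy algorithm, so the following is only a generic capacity-aware assignment sketch of the kind such frameworks use; the largest-demand-first ordering and the cost interface are assumptions, not the paper's method.

        def greedy_assign(groups, capacities, cost):
            # groups:     list of (group_id, parcel_demand) tuples
            # capacities: dict av_id -> remaining load capacity
            # cost:       function (av_id, group_id) -> estimated operational cost
            assignment = {}
            # Serve larger groups first so capacity is not fragmented early.
            for gid, demand in sorted(groups, key=lambda g: -g[1]):
                feasible = [a for a, cap in capacities.items() if cap >= demand]
                if not feasible:
                    continue  # group left unserved in this simple sketch
                best = min(feasible, key=lambda a: cost(a, gid))
                assignment[gid] = best
                capacities[best] -= demand
            return assignment

    A MILP solver recovers the optimal assignment on small instances; a greedy pass like this trades optimality for a running time that scales to large delivery scenarios.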
  4. Traffic congestion hits most big cities in the world, threatening long delays and serious reductions in air quality. City and local government officials continue to face challenges in optimizing crowd flow, synchronizing traffic, and mitigating threats or dangerous situations. One of the major challenges faced by city planners and traffic engineers is developing a robust traffic controller that eliminates traffic congestion and imbalanced traffic flow at intersections. Ensuring that traffic moves smoothly and minimizing the waiting time at intersections requires automated vehicle detection techniques for controlling the traffic lights automatically, which remains a challenging problem. In this paper, we propose an intelligent traffic pattern collection and analysis model, named TPCAM, based on traffic cameras to help smooth vehicular movement at junctions and reduce traffic congestion. Our traffic detection and pattern analysis model aims at detecting and calculating the traffic flux of vehicles and pedestrians at intersections in real time. Our system can utilize a single camera to capture all the traffic flows in one intersection instead of multiple cameras, which reduces the infrastructure requirements and eases deployment. We propose a new deep learning model based on YOLOv2 and adapt the model for traffic detection scenarios. To reduce the network burden and eliminate the deployment of a network backbone at the intersections, we propose to process the traffic video data at the network edge without transmitting the big data back to the cloud. To improve the processing frame rate at the edge, we further propose a deep object tracking algorithm leveraging adaptive multi-modal models and make it robust to object occlusions and varying lighting conditions. Based on the deep-learning-based detection and tracking, we can achieve pseudo-30 FPS via adaptive key frame selection.
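    The adaptive key-frame idea, running the heavy detector only on selected frames and a cheap tracker in between, can be sketched as follows. The detector/tracker interfaces, thresholds, and re-detection rule here are hypothetical, not TPCAM's actual design.

        def process_stream(frames, detect, track, min_conf=0.5, max_gap=10):
            # detect: frame -> list of boxes           (slow, YOLOv2-style detector)
            # track:  (frame, boxes) -> (boxes, conf)  (fast multi-modal tracker)
            boxes, since_detect = [], 0
            for frame in frames:
                conf = 1.0
                if boxes and since_detect < max_gap:
                    boxes, conf = track(frame, boxes)  # cheap update between key frames
                    since_detect += 1
                if not boxes or conf < min_conf or since_detect >= max_gap:
                    boxes = detect(frame)              # key frame: full detection
                    since_detect = 0
                yield frame, boxes

    Because the tracker is much cheaper than the detector, the average per-frame cost drops toward the tracker's, which is how a pseudo-30 FPS rate can be sustained at the edge.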
  5. The latest developments in vehicle-to-infrastructure (V2I) and vehicle-to-everything (V2X) technologies enable all the entities in the transportation system to communicate and collaborate to optimize transportation safety, mobility, and equity at the system level. On the other hand, the community of researchers and developers is becoming aware of the critical role of roadway infrastructure in realizing automated driving. In particular, intelligent infrastructure systems, which leverage modern sensors, artificial intelligence, and communication capabilities, can provide critical information and control support to connected and/or automated vehicles to fulfill functions that are infeasible for automated vehicles alone due to technical or cost considerations. However, there is limited research on formulating and standardizing the intelligence levels of road infrastructure to facilitate development, as the SAE automated driving levels have done for automated vehicles. This article proposes a five-level intelligence definition for intelligent roadway infrastructure, namely, the connected and automated highway (CAH). The CAH is a subsystem of the more extensive collaborative automated driving system (CADS), along with the connected automated vehicle (CAV) subsystem. Leveraging the intelligence definition of the CAH, the intelligence definition for the CADS is also given. Examples of how the CAH at different levels operates with the CAV in the CADS are also introduced to demonstrate the dynamic allocation of various automated driving tasks between different entities in the CADS.
     