Title: Two-dimensional active sensing system for bicyclist-motorist crash prediction
This paper develops an active sensing system for a bicycle to accurately track rear vehicles that can have two-dimensional motion. The active sensing system consists of a single-beam laser sensor mounted on a rotationally controlled platform. The sensing system is inexpensive, small, lightweight, and low-power, and is thus ideally suited for the bicycle application. The rotational orientation of the laser sensor must be actively controlled in real time so that it remains focused on a rear vehicle as the vehicle’s lateral and longitudinal distances change. This tracking problem requires controlling the real-time angular position of the laser sensor without knowing the future trajectory of the vehicle. The challenge is addressed using a novel receding horizon framework for active control and an interacting multiple model framework for estimation. The features and benefits of this active sensing system are illustrated first using simulation results. Then, preliminary experimental results obtained with an instrumented bicycle are presented to show the feasibility of the system in tracking rear vehicles during both straight and turning maneuvers.
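As a rough illustration of the tracking idea described above, the sketch below (Python) predicts the rear vehicle's relative position over a short horizon with a simple constant-velocity model and points the laser platform at the predicted bearing, subject to an assumed slew-rate limit. The function names, horizon length, sample time, and slew-rate bound are all illustrative assumptions; this is not the paper's actual receding horizon controller or interacting multiple model estimator.

```python
import numpy as np

def predict_positions(x, y, vx, vy, dt=0.05, horizon=10):
    """Constant-velocity prediction of the rear vehicle's relative position
    (longitudinal x, lateral y) over a short receding horizon."""
    t = dt * np.arange(1, horizon + 1)
    return x + vx * t, y + vy * t

def commanded_angle(x, y, vx, vy, current_angle, dt=0.05,
                    max_slew_rate=np.radians(60.0)):
    """Aim the platform at the one-step-ahead predicted position,
    limited by an assumed maximum slew rate (rad/s)."""
    xs, ys = predict_positions(x, y, vx, vy, dt)
    target = np.arctan2(ys[0], xs[0])        # bearing to predicted position
    step = np.clip(target - current_angle,
                   -max_slew_rate * dt, max_slew_rate * dt)
    return current_angle + step

# Example: vehicle 20 m behind and 1.5 m to the side, closing at 5 m/s
angle = commanded_angle(x=20.0, y=1.5, vx=-5.0, vy=0.3,
                        current_angle=np.radians(3.0))
print(np.degrees(angle))
```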
Award ID(s): 1631133
NSF-PAR ID: 10038725
Journal Name: American Control Conference (ACC), 2017
Page Range / eLocation ID: 2315 to 2320
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This paper explores the challenges in developing an inexpensive on-bicycle sensing system to track vehicles at a traffic intersection. In particular, opposing traffic in which vehicles can travel straight or turn left is considered. The estimated vehicle trajectories can be used for collision prevention between bicycles and left-turning vehicles. A compact solid-state 2-D low-density Lidar is mounted at the front of a bicycle to obtain distance measurements from vehicles. Vehicle tracking is achieved by clustering-based approaches that assign measurement points to individual vehicles, by introducing a correction term to refine position measurements, and by exploiting data association and interacting multiple model Kalman filtering for multi-target tracking. The tracking performance of the developed system is evaluated with both simulation and experimental results. Two types of scenarios, involving straight-driving and left-turning vehicles, are considered. Experimental results show that the developed system can accurately track cars in these scenarios despite the low measurement density of the sensor.
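A minimal sketch of the clustering step mentioned in the abstract above: consecutive 2-D Lidar returns (assumed to be ordered by scan angle) are grouped whenever neighboring points are closer than a distance threshold, and each sufficiently large cluster's centroid can then serve as a position measurement for a per-vehicle Kalman filter. The threshold, minimum cluster size, and function names are assumptions for illustration, not values from the paper.

```python
import numpy as np

def cluster_scan(points, gap_threshold=0.8):
    """Group 2-D Lidar returns (N x 2 array, ordered by scan angle) into
    clusters by splitting wherever consecutive points are farther apart
    than gap_threshold (meters); assumes at least one return."""
    clusters, current = [], [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - current[-1]) <= gap_threshold:
            current.append(p)
        else:
            clusters.append(np.array(current))
            current = [p]
    clusters.append(np.array(current))
    return clusters

def cluster_centroids(clusters, min_points=3):
    """Centroid of each sufficiently large cluster, usable as a position
    measurement for a per-vehicle tracking filter."""
    return [c.mean(axis=0) for c in clusters if len(c) >= min_points]
```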
  2. Vehicle‐to‐Everything (V2X) communication has been proposed as a potential solution to improve the robustness and safety of autonomous vehicles by improving coordination and removing the barrier of non‐line‐of‐sight sensing. Cooperative Vehicle Safety (CVS) applications depend tightly on the reliability of the underlying data system, which can suffer from loss of information due to inherent issues in its components, such as sensor failures or the poor performance of V2X technologies under dense communication channel load. In particular, information loss affects the target classification module and, subsequently, the safety application performance. To enable reliable and robust CVS systems that mitigate the effect of information loss, a Context‐Aware Target Classification (CA‐TC) module coupled with a hybrid learning‐based predictive modeling technique for CVS systems is proposed. The CA‐TC consists of two modules: a Context‐Aware Map (CAM) and a Hybrid Gaussian Process (HGP) prediction system. The vehicle safety applications then use the information from the CA‐TC, making them more robust and reliable. The CAM leverages vehicles' path history, road geometry, tracking, and prediction, while the HGP provides accurate vehicle trajectory predictions to compensate for data loss (due to communication congestion) or sensor measurement inaccuracies. Based on offline real‐world data, a finite bank of driver models that represent the joint dynamics of the vehicle and the driver's behavior is learned. Offline training and online model updates are combined with on‐the‐fly forecasting to account for new possible driver behaviors. Finally, the framework is validated using simulation and realistic driving scenarios to confirm its potential in enhancing the robustness and reliability of CVS systems.
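The trajectory-prediction role of the HGP can be illustrated, very loosely, with an ordinary Gaussian process regression over a vehicle's recent position history. The sketch below uses scikit-learn with an assumed RBF-plus-noise kernel; it is not the paper's hybrid GP or driver-model bank.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_trajectory_gp(times, positions):
    """Fit independent GPs to the x and y history of one vehicle.
    Kernel choice (RBF + white noise) is an assumption for illustration."""
    kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
    gps = []
    for dim in range(positions.shape[1]):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(times.reshape(-1, 1), positions[:, dim])
        gps.append(gp)
    return gps

def predict_trajectory(gps, future_times):
    """Predict the vehicle's position (with uncertainty) at future times,
    e.g., to bridge a gap caused by lost V2X messages."""
    means, stds = zip(*(gp.predict(future_times.reshape(-1, 1),
                                   return_std=True) for gp in gps))
    return np.column_stack(means), np.column_stack(stds)
```

In such a setup, the predicted means could stand in for missing updates, while the predictive standard deviations indicate how far the extrapolation can be trusted.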

     
  3. With the trend of vehicles becoming increasingly connected and potentially autonomous, vehicles are being equipped with rich sensing and communication devices. Various vehicular services based on shared real-time sensor data from a fleet have been proposed to improve urban efficiency, e.g., HD live maps and traffic accident recovery. However, due to the high cost of data uploading (e.g., monthly fees for a cellular network), it would be impractical to have all well-equipped vehicles upload real-time sensor data constantly. To better utilize these limited uploading resources and achieve optimal road-segment sensing coverage, we present RISC, a real-time sensing task scheduling framework for resource-constrained urban sensing, which schedules the sensing tasks of sensor-equipped commercial vehicles based on the predictability of their mobility patterns. In particular, we utilize commercial vehicles, including taxicabs, buses, and logistics trucks, as mobile sensors to sense urban phenomena, e.g., traffic, using their onboard vehicular sensors, e.g., dash cams, lidar, and automotive radar. We implement RISC in the Chinese city of Shenzhen with one month of real-world data from (i) a taxi fleet with 14 thousand vehicles; (ii) a bus fleet with 13 thousand vehicles; and (iii) a truck fleet with 4 thousand vehicles. Further, we design an application, i.e., tracking suspect vehicles (e.g., hit-and-run vehicles), to evaluate the performance of RISC on the urban sensing aspect based on data from a regular (personal car) fleet with 11 thousand vehicles. The evaluation results show that, compared to state-of-the-art solutions, RISC improves sensing coverage (i.e., the number of road segments covered by sensing vehicles) by 10% on average.
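To make the coverage-oriented scheduling idea concrete, here is a hypothetical greedy sketch: given each vehicle's predicted set of road segments, it repeatedly selects the vehicle that adds the most uncovered segments until the upload budget runs out. The data structures and greedy rule are illustrative stand-ins, not RISC's actual scheduling algorithm.

```python
def schedule_sensing_tasks(predicted_coverage, budget):
    """Greedy coverage-oriented scheduling sketch. 'predicted_coverage'
    maps vehicle id -> set of road-segment ids the vehicle is predicted
    to traverse; 'budget' is the number of vehicles allowed to upload."""
    covered, selected = set(), []
    candidates = dict(predicted_coverage)
    for _ in range(budget):
        if not candidates:
            break
        best = max(candidates, key=lambda v: len(candidates[v] - covered))
        if not candidates[best] - covered:
            break  # no remaining vehicle adds new coverage
        covered |= candidates.pop(best)
        selected.append(best)
    return selected, covered

# Example with three vehicles and a budget of two uploads
tasks, covered = schedule_sensing_tasks(
    {"taxi_1": {1, 2, 3}, "bus_7": {3, 4}, "truck_2": {5}}, budget=2)
print(tasks, covered)   # ['taxi_1', 'bus_7'] {1, 2, 3, 4}
```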
  4. The Go-CHART is a four-wheel, skid-steer robot that resembles a 1:28-scale standard commercial sedan. It is equipped with an onboard sensor suite and both onboard and external computers that replicate many of the sensing and computation capabilities of a full-size autonomous vehicle. The Go-CHART can autonomously navigate a small-scale traffic testbed, responding to its sensor input with programmed controllers. Alternatively, it can be remotely driven by a user who views the testbed through the robot's four camera feeds, which facilitates safe, controlled experiments on driver interactions with driverless vehicles. We demonstrate the Go-CHART's ability to perform lane tracking and detection of traffic signs, traffic signals, and other Go-CHARTs in real time, utilizing an external GPU that runs computationally intensive computer vision and deep learning algorithms.
  5. The operational safety of Automated Driving System (ADS)-Operated Vehicles (AVs) is a rising concern as AVs are deployed both as prototypes under test and in commercial service. The robustness of safety evaluation systems is essential in determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work of the Institute of Automated Mobility (IAM) on Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, in this work we compare the performance of an infrastructure-based Light Detection And Ranging (LIDAR) system to an onboard vehicle-based LIDAR system in testing at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor modalities, located both in the infrastructure and onboard the test vehicles, include LIDAR, cameras, a real-time differential GPS, and a drone with a camera. Bespoke localization and tracking algorithms are created for the LIDAR and cameras. In total, there are 26 different scenarios of the test vehicles navigating the testbed intersection; for this work, we consider only car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard vehicle-based sensor systems are used to perform object detection and multi-target tracking to estimate the velocity and position of the test vehicles, and these values are used to compute OSA metrics. The comparison of the two systems involves the localization and tracking errors in calculating the position and velocity of the subject vehicle, with real-time differential GPS data serving as ground truth for the velocity comparison and tracking results from the drone used for the OSA metrics comparison.
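As an illustration of turning tracked states into safety metrics, the sketch below computes generic car-following surrogate measures (gap, time headway, time-to-collision) from estimated positions and velocities. These are common surrogate-safety quantities assumed for illustration; they are not necessarily the exact OSA metric definitions used by the IAM.

```python
import numpy as np

def car_following_metrics(lead_pos, follow_pos, lead_vel, follow_vel):
    """Compute gap, time headway, and time-to-collision (TTC) for a
    car-following pair from tracked positions (m) and velocities (m/s)."""
    gap = np.linalg.norm(np.asarray(lead_pos) - np.asarray(follow_pos))
    follow_speed = np.linalg.norm(follow_vel)
    closing_speed = follow_speed - np.linalg.norm(lead_vel)
    headway = gap / follow_speed if follow_speed > 0 else float("inf")
    ttc = gap / closing_speed if closing_speed > 0 else float("inf")
    return {"gap_m": gap, "time_headway_s": headway, "ttc_s": ttc}

# Example: follower 18 m behind, both heading +x, follower closing at 2 m/s
print(car_following_metrics([30.0, 0.0], [12.0, 0.0],
                            [10.0, 0.0], [12.0, 0.0]))
```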

     