Search for: All records

Award ID contains: 2018879


  1. Abstract

    Connected autonomous vehicles (CAVs) have the potential to accommodate the steady increase in road traffic while addressing transportation-related issues such as traffic congestion, pollution, and road safety. CAVs are therefore becoming increasingly popular and are viewed as the next-generation transportation solution. Although modular advancements have been achieved in the development of CAVs, these efforts are not fully integrated to operationalize CAVs in realistic driving scenarios. This paper surveys a wide range of efforts reported in the literature on CAV development, summarizes the impacts of CAVs from a statistical perspective, and explores the current state of practice in the field in terms of autonomy technologies, communication backbone, and computation needs. Furthermore, the paper provides general guidance on how transportation infrastructure should be prepared in order to effectively operationalize CAVs. It also identifies challenges that must be addressed in the near future for effective and reliable adoption of CAVs.

  2. This paper addresses the problem of detecting pedestrians using an enhanced object detection method. In particular, it considers occluded pedestrian detection in autonomous driving scenarios, where the balance between accuracy and speed is crucial. Existing works focus on learning representations of unique persons independent of body-part semantics. To achieve real-time performance along with robust detection, we introduce a body-parts-based pedestrian detection architecture in which body parts are fused through a computationally efficient constrained-optimization technique. We demonstrate that our method significantly improves detection accuracy while adding negligible runtime overhead. We evaluate our method on a real-world dataset, and experimental results show that it outperforms existing pedestrian detection methods.
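As a rough illustration of part-based detection under occlusion, the sketch below combines per-part detector confidences into a single pedestrian score. The part names, weights, and renormalization rule are illustrative assumptions, not the paper's exact constrained-optimization formulation:

```python
# Hypothetical part-based score fusion for occluded pedestrian detection.
# Part names and weights are assumptions for illustration only.

PART_WEIGHTS = {"head": 0.3, "torso": 0.4, "legs": 0.3}

def fuse_part_scores(part_scores, min_visible=2):
    """Combine per-part detector confidences into one pedestrian score.

    part_scores: dict mapping part name -> confidence in [0, 1];
    occluded parts are simply absent. Requiring only `min_visible`
    parts lets partially occluded pedestrians still be detected.
    """
    visible = {p: s for p, s in part_scores.items() if p in PART_WEIGHTS}
    if len(visible) < min_visible:
        return 0.0
    # Renormalize weights over the visible parts so occlusion does not
    # systematically drag the fused score down.
    total_w = sum(PART_WEIGHTS[p] for p in visible)
    return sum(PART_WEIGHTS[p] * s for p, s in visible.items()) / total_w

# A pedestrian with occluded legs can still score highly:
score = fuse_part_scores({"head": 0.9, "torso": 0.8})
```

The renormalization step is the key design choice here: without it, every occluded part would lower the fused confidence, penalizing exactly the cases the method is meant to handle.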
  3. This paper presents a novel method for pedestrian detection and tracking that fuses camera and LiDAR sensor data. To deal with the challenges of autonomous driving scenarios, an integrated tracking and detection framework is proposed. The detection phase converts LiDAR streams into computationally tractable depth images, and a deep neural network then identifies pedestrian candidates in both RGB and depth images. To improve accuracy, the detection phase is further enhanced by fusing the multi-modal sensor information with a Kalman filter. The tracking phase combines the Kalman filter prediction with an optical flow algorithm to track multiple pedestrians in a scene. We evaluate our framework on a public real-world driving dataset. Experimental results demonstrate that the proposed method achieves a significant performance improvement over a baseline that uses image-based pedestrian detection alone.
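A minimal sketch of how a Kalman filter can fuse camera and LiDAR position measurements of a single pedestrian on the ground plane; the constant-velocity motion model and the noise covariances below are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

# Constant-velocity Kalman filter fusing camera and LiDAR measurements
# of one pedestrian's 2-D position. Covariance values are illustrative.

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # both sensors measure (x, y)
Q = np.eye(4) * 1e-3                         # process noise
R_CAM = np.eye(2) * 0.50                     # camera: noisier position
R_LIDAR = np.eye(2) * 0.05                   # LiDAR: more precise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([2.1, 1.0]), R_CAM)    # camera measurement
x, P = update(x, P, np.array([2.0, 0.9]), R_LIDAR)  # LiDAR measurement
```

Because the LiDAR measurement carries a smaller covariance, the fused estimate is pulled closer to it than to the camera reading; the same predict/update cycle also yields the prediction step that the tracking phase pairs with optical flow.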
  4. State-of-the-art lane detection methods use a variety of deep learning techniques for lane feature extraction and prediction, demonstrating better performance than conventional lane detectors. However, deep learning approaches are computationally demanding and often fail to meet the real-time requirements of autonomous vehicles. This paper proposes a lane detection method that uses a lightweight convolutional neural network as a feature extractor, exploiting the potential of deep learning while meeting real-time needs. The model is trained on a dataset of small image patches of 16 × 64 pixels, and a non-overlapping sliding-window approach is employed to achieve fast inference. The predictions are then clustered and fitted with a polynomial to model the lane boundaries. The proposed method was tested on the KITTI and Caltech datasets and demonstrated acceptable performance. We also integrated the detector into the localization and planning system of our autonomous vehicle, where it runs at 28 fps on a CPU at an image resolution of 768 × 1024, meeting the real-time requirements of self-driving cars.
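The patch-based pipeline described above can be sketched as follows. A non-overlapping 16 × 64 sliding window is scored by a classifier, and the centers of positive patches are fit with a 2nd-degree polynomial; here a simple brightness threshold stands in for the paper's CNN, and the toy image is purely illustrative:

```python
import numpy as np

PATCH_H, PATCH_W = 16, 64  # patch size from the paper; classifier is a stand-in

def detect_lane_points(image, classify):
    """Return (row, col) centers of patches classified as lane markings."""
    points = []
    for r in range(0, image.shape[0] - PATCH_H + 1, PATCH_H):
        for c in range(0, image.shape[1] - PATCH_W + 1, PATCH_W):
            if classify(image[r:r + PATCH_H, c:c + PATCH_W]):
                points.append((r + PATCH_H // 2, c + PATCH_W // 2))
    return points

def fit_lane(points, degree=2):
    """Fit col = f(row), so near-vertical lane boundaries stay single-valued."""
    rows = np.array([p[0] for p in points], dtype=float)
    cols = np.array([p[1] for p in points], dtype=float)
    return np.polyfit(rows, cols, degree)

# Toy image: a bright diagonal stripe standing in for a lane marking.
img = np.zeros((128, 256))
for r in range(128):
    img[r, 2 * r] = 255.0

pts = detect_lane_points(img, lambda patch: patch.mean() > 1.0)
coeffs = fit_lane(pts)  # highest-degree coefficient first (np.polyfit order)
```

Fitting column as a function of row (rather than the reverse) is a common choice for forward-facing road images, where lane boundaries are roughly vertical in image space.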