This content will become publicly available on October 1, 2024

Title: Real-Time Learning of Driving Gap Preference for Personalized Adaptive Cruise Control
Advanced Driver Assistance Systems (ADAS) are increasingly important in improving driving safety and comfort, with Adaptive Cruise Control (ACC) being one of the most widely used. However, pre-defined ACC settings may not always align with a driver's preferences and habits, leading to discomfort and potential safety issues. Personalized ACC (P-ACC) has been proposed to address this problem, but most existing research uses historical driving data to imitate behaviors that conform to driver preferences, neglecting real-time driver feedback. To bridge this gap, we propose a cloud-vehicle collaborative P-ACC framework that incorporates driver feedback adaptation in real time. The framework is divided into offline and online parts. The offline component records the driver's naturalistic car-following trajectory and uses inverse reinforcement learning (IRL) to train the model on the cloud. In the online component, driver feedback is used to update the driving gap preference in real time. The model is then retrained on the cloud with the driver's takeover trajectories, achieving incremental learning to better match the driver's preferences. Human-in-the-loop (HuiL) simulation experiments demonstrate that our proposed method significantly reduces driver intervention in automatic control systems by up to 62.8%. By incorporating real-time driver feedback, our approach enhances the comfort and safety of P-ACC, providing a personalized and adaptable driving experience.
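The offline IRL retraining described in the abstract can be illustrated with a single feature-matching gradient step; the linear reward parameterization, the feature encoding, and the learning rate below are assumptions for illustration, not details from the paper.

```python
import numpy as np

def irl_weight_update(w, expert_feats, policy_feats, lr=0.1):
    """One maximum-entropy-style IRL step: shift linear reward weights
    toward the expert's (driver's) average trajectory features and away
    from the current policy's. Each feature row might encode, e.g.,
    time gap and relative speed per step (a hypothetical encoding)."""
    grad = expert_feats.mean(axis=0) - policy_feats.mean(axis=0)
    return w + lr * grad

# After a driver takeover, the recorded trajectory becomes new expert
# data, so repeated calls to this update on the cloud sketch the
# incremental retraining loop.
```

Repeating this update until the policy's feature expectations match the driver's is the core of feature-matching IRL; the cloud side would also need a planner that re-derives a policy from the updated reward weights.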
Award ID(s):
2152258
NSF-PAR ID:
10510959
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-3702-0
Page Range / eLocation ID:
4675 to 4682
Format(s):
Medium: X
Location:
Honolulu, Oahu, HI, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Adaptive Cruise Control (ACC) has become increasingly popular in modern vehicles, providing enhanced driving safety, comfort, and fuel efficiency. However, predefined ACC settings may not always align with a driver's preferences, leading to discomfort and possible safety hazards. Personalized ACC (P-ACC) has been studied to address this issue, but existing research mostly relies on historical driving data to imitate driver styles, ignoring real-time feedback from the driver. To overcome this limitation, we propose a cloud-vehicle collaborative P-ACC framework that integrates real-time driver feedback adaptation. The framework consists of offline and online modules. The offline module records the driver's naturalistic car-following trajectory and uses inverse reinforcement learning (IRL) to train the model on the cloud. The online module uses the driver's real-time feedback to update the driving gap preference in real time via Gaussian process regression (GPR). By retraining the model on the cloud with the driver's takeover trajectories, our approach achieves incremental learning to better match the driver's preferences. In human-in-the-loop (HuiL) simulation experiments, the proposed framework reduces driver intervention in the automatic control system by up to 70.9%.
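The GPR-based online gap-preference update can be sketched roughly as follows; the squared-exponential kernel, its hyper-parameters, the 1.5 s prior gap, and the class interface are all illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def rbf_kernel(a, b, length=5.0, var=1.0):
    """Squared-exponential kernel over ego speed [m/s]."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

class GapPreferenceGPR:
    """Posterior mean of the driver's desired time gap as a function of
    ego speed, updated from takeover observations (illustrative sketch)."""
    def __init__(self, noise=0.05):
        self.noise = noise
        self.speeds = np.empty(0)   # ego speeds at takeover [m/s]
        self.gaps = np.empty(0)     # gaps the driver settled on [s]

    def observe(self, speed, gap):
        """Record one piece of real-time driver feedback."""
        self.speeds = np.append(self.speeds, speed)
        self.gaps = np.append(self.gaps, gap)

    def preferred_gap(self, speed, prior=1.5):
        """GP posterior mean around a constant prior gap; falls back to
        the prior when no feedback has been observed yet."""
        if self.speeds.size == 0:
            return prior
        K = rbf_kernel(self.speeds, self.speeds) \
            + self.noise * np.eye(self.speeds.size)
        k = rbf_kernel(np.array([speed]), self.speeds)
        return float(prior + k @ np.linalg.solve(K, self.gaps - prior))
```

Because the posterior mean interpolates between observed takeovers and the prior, a single takeover at one speed only nudges the preference near that speed, which matches the incremental flavor of the framework.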
  2. Reinforcement learning (RL) methods can be used to develop a controller for heating, ventilation, and air conditioning (HVAC) systems that both saves energy and maintains high occupant thermal comfort. However, existing works typically require on-policy data to train the RL agent and do not consider occupants' personalized thermal preferences, which limits their use in real-world scenarios. This paper designs a high-performance model-based offline RL algorithm for personalized HVAC systems. The proposed algorithm can quickly adapt to different occupants' thermal preferences from a small amount of thermal feedback, efficiently guaranteeing high personalized thermal comfort. First, we use a meta-supervised learning algorithm to train an occupant's thermal preference model. Then, we train an ensemble of neural networks to predict the thermal states of the considered zone; the ensemble also indicates which regions of the state and action spaces are covered by the offline dataset. With the personalized thermal preference model updated via meta-testing, model-based RL is used to derive the optimal HVAC controller. Since the proposed algorithm requires only offline datasets and a small amount of online thermal feedback for training, it supports a more practical deployment of RL in HVAC systems. We use the ASHRAE database II to verify the effectiveness and advantage of the meta-learning algorithm for modeling different occupants' thermal preferences. Numerical simulations in the EnergyPlus environment demonstrate that the proposed algorithm can guarantee personalized thermal preferences with only a 1.91% increase in power consumption compared with a model-based RL algorithm using on-policy data aggregation.
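The model-based, uncertainty-aware action selection can be sketched as a one-step lookahead; the cost weights, the candidate-setpoint search, and the ensemble-disagreement penalty are illustrative simplifications of the paper's model-based offline RL procedure, not its actual controller.

```python
import numpy as np

def select_setpoint(temp_now, candidates, ensemble, comfort_temp,
                    energy_w=0.02, unc_w=1.0):
    """Pick the HVAC setpoint whose predicted next temperature best
    matches the occupant's preferred temperature, penalizing ensemble
    disagreement (a pessimism term keeping the controller inside the
    region covered by the offline data). Names and weights are
    hypothetical."""
    best, best_cost = None, np.inf
    for u in candidates:
        preds = np.array([m(temp_now, u) for m in ensemble])
        discomfort = (preds.mean() - comfort_temp) ** 2
        uncertainty = preds.std()      # large where offline data is sparse
        energy = energy_w * abs(u - temp_now)
        cost = discomfort + unc_w * uncertainty + energy
        if cost < best_cost:
            best, best_cost = u, cost
    return best
```

In the paper's setting, `comfort_temp` would come from the meta-learned preference model and each ensemble member would be a trained neural dynamics model; here plain callables stand in for both.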
  3. As the popularity of online travel platforms increases, users tend to make ad-hoc decisions on places to visit rather than preparing detailed tour plans in advance. Given the timeliness and uncertainty of users' demands, integrating real-time context into dynamic and personalized recommendations has become a key issue for travel recommender systems. In this paper, by integrating users' historical preferences with real-time context, we propose a location-aware recommender system called TRACE (Travel Reinforcement Recommendations Based on Location-Aware Context Extraction). It captures users' features with a location-aware context learning model and makes dynamic recommendations via reinforcement learning. Specifically, this research: (1) designs a travel reinforcement recommender system based on an Actor-Critic framework, which can dynamically track user preference shifts and optimize recommender performance; (2) proposes a location-aware context learning model, which extracts user context from real-time location and then calculates the impact of nearby attractions on users' preferences; and (3) conducts both offline and online experiments. Our proposed model achieves the best performance in both experiments, demonstrating that tracking users' preference shifts based on real-time location is valuable for improving recommendation results.
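The actor's scoring step can be caricatured as a linear map from concatenated user-preference and location-context features to attraction scores; the shapes and the linear form are assumptions for illustration (the paper uses neural Actor-Critic networks, not this toy).

```python
import numpy as np

def recommend(user_vec, context_vec, attraction_vecs, actor_w):
    """Score each nearby attraction from user-preference features plus
    real-time location-context features, and return the index of the
    top-scoring one. `actor_w` plays the role of the Actor network."""
    feats = np.concatenate([user_vec, context_vec])
    scores = attraction_vecs @ (actor_w @ feats)
    return int(np.argmax(scores))
```

In the full system, the Critic would score the chosen attraction against observed user behavior and its TD error would update `actor_w`, which is how preference shifts get tracked over time.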
  4. Driver assist features such as adaptive cruise control (ACC) and highway assistants are becoming increasingly prevalent on commercially available vehicles. These systems are typically designed for safety and rider comfort. However, these systems are often not designed with the quality of the overall traffic flow in mind. For such a system to be beneficial to the traffic flow, it must be string stable and minimize the inter-vehicle spacing to maximize throughput, while still being safe. We propose a methodology to select autonomous driving system parameters that are both safe and string stable using the existing control framework already implemented on commercially available ACC vehicles. Optimal parameter values are selected via model-based optimization for an example highway assistant controller with path planning. 
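A string-stability check of the kind used to screen candidate parameters can be sketched for a linear constant-time-headway ACC law; this specific law and its gains are a textbook stand-in, not the commercial controller's actual dynamics, which the paper treats via model-based optimization.

```python
import numpy as np

def string_stable(kp, kd, headway, freqs=np.logspace(-3, 2, 2000)):
    """Check string stability of the linear ACC law
        a = kp*(gap - headway*v) + kd*(v_lead - v)
    via its leader-to-follower speed transfer function
        G(s) = (kd*s + kp) / (s**2 + (kp*headway + kd)*s + kp).
    The platoon is string stable iff |G(jw)| <= 1 at all frequencies,
    so disturbances attenuate rather than amplify down the platoon."""
    s = 1j * freqs
    gain = np.abs((kd * s + kp) / (s**2 + (kp * headway + kd) * s + kp))
    return bool(np.max(gain) <= 1.0 + 1e-9)
```

Sweeping `headway` downward until this check fails gives the smallest spacing parameter that is still string stable, which mirrors the paper's trade-off between throughput and stability.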
  5. The Intelligent Driver Model (IDM) is one of the most widely used car-following models for representing human drivers in mixed-traffic simulations. However, the standard IDM performs unrealistically well in energy efficiency and comfort (acceleration) compared with real-world human drivers. Moreover, many studies have assessed the performance of automated vehicles (AVs) interacting with human-driven vehicles (HVs) in mixed traffic, with the IDM serving as the HVs, on the assumption that the IDM represents an intelligent human driver who performs no better than the AVs. When a commercially available AV control system, Adaptive Cruise Control (ACC), is compared with the standard IDM, the standard IDM generally outperforms ACC in fuel efficiency and comfort, which is illogical for any evaluation of advanced control logic in mixed traffic. To ensure the IDM reasonably mimics human drivers, a dynamic safe time headway concept is proposed and evaluated. The real-world NGSIM data set provides human-driver trajectories for simulation-based comparisons. The results indicate that the IDM with dynamic time headway performs much closer to human drivers and, as expected, worse than the ACC system.
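The standard IDM acceleration law and a speed-dependent headway can be sketched as follows; the IDM formula is the standard one, but the `dynamic_headway` mapping is a hypothetical stand-in for the paper's dynamic safe time headway (the actual mapping is not given here), and the parameter values are common textbook defaults rather than calibrated ones.

```python
import math

def idm_accel(v, dv, s, T, v0=33.3, a_max=1.0, b=1.5, s0=2.0, delta=4):
    """Standard IDM acceleration. v: ego speed [m/s], dv: approach rate
    v - v_lead [m/s], s: bumper-to-bumper gap [m], T: time headway [s]."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

def dynamic_headway(v, T_min=1.0, T_max=2.2, v_ref=30.0):
    """A hypothetical speed-dependent time headway: replacing the fixed
    T with a function of speed is the spirit of the dynamic safe time
    headway concept, though this particular linear ramp is an assumption."""
    return T_min + (T_max - T_min) * min(v / v_ref, 1.0)
```

Feeding `dynamic_headway(v)` into `idm_accel` at every step makes the model's spacing behavior vary with speed the way human drivers' does, rather than holding one optimal headway throughout.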