

Search for: All records

Creators/Authors contains: "Kang, G."


  1. Understanding and learning the actor-to-X interactions (AXIs), such as those between the focal vehicle (actor) and other traffic participants (e.g., other vehicles, pedestrians) as well as traffic environments (e.g., city/road maps), is essential for developing decision-making models and simulations of autonomous driving (AD). Existing imitation learning (IL) practices for AD simulation, despite advances in model learnability, have not accounted for fusing and differentiating the heterogeneous AXIs in complex road environments. Furthermore, how to explain the hierarchical structures within the complex AXIs remains largely under-explored. To overcome these challenges, we propose HGIL, an interaction-aware and hierarchically-explainable Heterogeneous Graph-based Imitation Learning approach for AD simulation. We have designed a novel heterogeneous interaction graph (HIG) to provide local and global representation as well as awareness of the AXIs. Integrating the HIG as the state embeddings, we have designed a hierarchically-explainable generative adversarial imitation learning approach, with local sub-graph and global cross-graph attention, to capture the interaction behaviors and driving decision-making processes. Our data-driven simulation and explanation studies have corroborated the accuracy and explainability of HGIL in learning and capturing the complex AXIs.
    Free, publicly-accessible full text available October 1, 2024
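As an illustrative sketch only (not the authors' HGIL implementation), the local sub-graph attention idea can be pictured as an actor node aggregating its neighbors' features with type-aware attention weights. The node features, types, and per-type weights below are invented for illustration; a real model would learn them end-to-end.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hig_attention(node_feats, node_types, edges, actor):
    """Toy local sub-graph attention over a heterogeneous interaction
    graph (HIG): the actor aggregates neighbor features weighted by
    type-aware attention scores (fixed here; learned in practice)."""
    type_weight = {"vehicle": 1.0, "pedestrian": 1.5, "map": 0.5}
    nbrs = [j for (i, j) in edges if i == actor]
    scores = np.array([
        type_weight[node_types[j]] * float(node_feats[actor] @ node_feats[j])
        for j in nbrs
    ])
    alpha = softmax(scores)  # attention coefficients, sum to 1
    return sum(a * node_feats[j] for a, j in zip(alpha, nbrs))

feats = np.eye(4)                 # 4 nodes with one-hot toy features
types = ["vehicle", "vehicle", "pedestrian", "map"]
edges = [(0, 1), (0, 2), (0, 3)]  # actor 0 attends to all others
emb = hig_attention(feats, types, edges, actor=0)
```

In the paper's setting, such local sub-graph embeddings would feed into the global cross-graph attention and the generative adversarial imitation learning objective.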
  2. Learning the human-mobility interaction (HMI) in interactive scenes (e.g., how a vehicle turns at an intersection in response to traffic lights and other oncoming vehicles) can enhance the safety, efficiency, and resilience of smart mobility systems (e.g., autonomous vehicles) and many other ubiquitous computing applications. Towards ubiquitous and understandable HMI learning, this paper considers both spoken language (e.g., human textual annotations) and unspoken language (e.g., visual and sensor-based behavioral mobility information related to the HMI scenes) as information modalities from real-world HMI scenarios. We aim to extract the important but possibly implicit HMI concepts (as named entities) from the textual annotations (provided by human annotators) through a novel human language and sensor data co-learning design.

    To this end, we propose CG-HMI, a novel Cross-modality Graph fusion approach for extracting important Human-Mobility Interaction concepts from the co-learning of textual annotations as well as visual and behavioral sensor data. To fuse both unspoken and spoken languages, we have designed a unified representation called the human-mobility interaction graph (HMIG) for each modality related to the HMI scenes, i.e., textual annotations, visual video frames, and behavioral sensor time-series (e.g., from the on-board or smartphone inertial measurement units). The nodes of the HMIG in these modalities correspond to the textual words (tokenized for ease of processing) related to HMI concepts, the detected traffic participant/environment categories, and the vehicle maneuver behavior types determined from the behavioral sensor time-series.

    To extract the inter- and intra-modality semantic correspondences and interactions in the HMIG, we have designed a novel graph interaction fusion approach with differentiable pooling-based graph attention. The resulting graph embeddings are then processed to identify and retrieve the HMI concepts within the annotations, which can benefit downstream human-computer interaction and ubiquitous computing applications.

    We have developed and implemented CG-HMI into a system prototype, and performed extensive studies upon three real-world HMI datasets (two on car driving and one on e-scooter riding). We have corroborated the excellent performance (on average 13.11% higher accuracy than the other baselines in terms of precision, recall, and F1 measure) and effectiveness of CG-HMI in recognizing and extracting the important HMI concepts through cross-modality learning. Our CG-HMI studies also provide real-world implications (e.g., road safety and driving behaviors) regarding the interactions between drivers and other traffic participants.

    Free, publicly-accessible full text available September 27, 2024
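To make the HMIG idea concrete, here is a toy sketch (not the paper's code) that builds per-modality node sets and links nodes across modalities that map to the same underlying concept. The tokens, categories, and the `concept_of` lexicon are all hypothetical, and the paper's differentiable pooling-based graph attention is replaced by simple exact-match edges.

```python
# Per-modality HMIG node sets for one hypothetical scene.
text_tokens  = ["vehicle", "turns", "left", "at", "intersection"]
visual_nodes = ["vehicle", "traffic_light", "intersection"]
sensor_nodes = ["left_turn", "decelerate"]  # maneuvers from IMU time-series

# Hypothetical lexicon mapping surface forms to shared HMI concepts.
concept_of = {
    "vehicle": "vehicle", "intersection": "intersection",
    "left": "left_turn", "left_turn": "left_turn",
}

def cross_edges(nodes_a, nodes_b):
    """Connect nodes from two modalities that share the same concept."""
    return [(a, b) for a in nodes_a for b in nodes_b
            if concept_of.get(a) is not None
            and concept_of.get(a) == concept_of.get(b)]

tv = cross_edges(text_tokens, visual_nodes)  # text <-> visual links
ts = cross_edges(text_tokens, sensor_nodes)  # text <-> sensor links
```

In CG-HMI proper, these cross-modality correspondences are learned by graph attention rather than dictionary lookup, which is what lets implicit concepts surface from the annotations.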
  3. Free, publicly-accessible full text available August 1, 2024
  4. Falls are the second leading cause of accidental or unintentional injuries/deaths worldwide. Accurate pose estimation using commodity mobile devices will help early detection and injury assessment of falls, which are essential for the first aid of elderly falls. Following the definition of a fall, we propose a Pervasive Pose Estimation scheme for fall detection (P²Est), which measures changes in the tilt angle and height of the human body. For the tilt measurement, P²Est leverages the pointing of the mobile device, e.g., the smartphone, when unlocking, to associate the device coordinate system with the world coordinate system. For the height measurement, P²Est exploits the fact that a person's height remains unchanged while walking to calibrate the pressure difference between the device and the floor. We have prototyped and tested P²Est in various situations and environments. Our extensive experimental results have demonstrated that P²Est can track the body orientation irrespective of which pocket the phone is placed in. More importantly, it enables the phone's barometer to detect falls in various environments with decimeter-level accuracy.
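The height side of such a scheme can be sketched with the standard hydrostatic approximation Δh = ΔP/(ρg): a drop in device height shows up as a small rise in barometric pressure. The air density, thresholds, and pressure readings below are illustrative assumptions, not the paper's calibration.

```python
RHO_AIR = 1.225  # kg/m^3, air density near sea level (assumption)
G = 9.81         # m/s^2, gravitational acceleration

def height_drop_m(p_before_pa, p_after_pa):
    """Barometric height change via dh = dP / (rho * g).
    A fall increases the device's pressure reading slightly."""
    return (p_after_pa - p_before_pa) / (RHO_AIR * G)

def is_fall(tilt_deg, drop_m, tilt_thresh=60.0, drop_thresh=0.5):
    """Flag a fall when the body tilts past the threshold AND the
    device drops by more than ~0.5 m (thresholds are illustrative)."""
    return tilt_deg > tilt_thresh and drop_m > drop_thresh

# A ~12 Pa pressure increase corresponds to roughly a 1 m drop.
drop = height_drop_m(101325.0, 101337.0)
```

Note the ~12 Pa/m scale: resolving decimeter-level height changes, as the abstract reports, requires calibrating away pressure offsets, which is what the walking-height trick above provides.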
  5.
  6.
    The ability to sense ambient temperature pervasively, albeit crucial for many applications, is not yet available, causing problems such as degraded indoor thermal comfort and unexpected/premature shutoffs of mobile devices. To enable pervasive sensing of ambient temperature, we propose using mobile device batteries as thermometers, based on (i) the fact that people always carry their battery-powered smartphones, and (ii) our empirical finding that the temperature of mobile devices' batteries is highly correlated with that of their operating environment. Specifically, we design and implement Batteries-as-Thermometers (BaT), a temperature sensing service based on the information from mobile device batteries, expanding the ability to sense the device's ambient temperature without requiring additional sensors or taking up the limited on-device space. We have evaluated BaT on 6 Android smartphones using 19 laboratory experiments and 36 real-life field tests, showing an average error of 1.25°C in sensing the ambient temperature.
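A minimal sketch of the battery-to-ambient idea (not BaT's actual model) is a least-squares line fit from battery temperature to ambient temperature. The paired readings below are made up for illustration; BaT's real calibration would account for device workload and charging state.

```python
import numpy as np

# Hypothetical paired measurements: battery temperature vs. the
# ambient temperature measured by a reference thermometer.
battery_c = np.array([28.0, 30.5, 33.0, 35.5, 38.0])
ambient_c = np.array([18.0, 20.0, 22.0, 24.0, 26.0])

# Fit ambient = a * battery + b by least squares.
a, b = np.polyfit(battery_c, ambient_c, deg=1)

def estimate_ambient(batt_temp_c):
    """Map a battery temperature reading to an ambient estimate."""
    return a * batt_temp_c + b

est = estimate_ambient(31.75)
```

The exploited correlation means the slope and intercept, once calibrated per device, turn a reading phones already expose (battery temperature) into an ambient estimate with no extra hardware.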