Search results: all records where Creators/Authors contains "Lu, Hongsheng"

  1. Free, publicly-accessible full text available September 28, 2025
  2. Abstract: In Smart City and Vehicle-to-Everything (V2X) systems, acquiring pedestrians' accurate locations is crucial to traffic and pedestrian safety. Current systems adopt cameras and wireless sensors to estimate people's locations via sensor fusion. Standard fusion algorithms, however, become inapplicable when the multi-modal data are not associated, for example when pedestrians are outside the camera's field of view or data from the camera modality is missing. To address this challenge and produce more accurate location estimates for pedestrians, we propose a localization solution based on a Generative Adversarial Network (GAN) architecture. During training, it learns the underlying linkage between pedestrians' camera-phone data correspondences. During inference, it generates refined position estimates based only on pedestrians' phone data, which consists of GPS, IMU, and FTM measurements. Results show that our GAN produces 3D coordinates with 1 to 2 meters of localization error across 5 different outdoor scenes. We further show that the proposed model supports self-learning: the generated coordinates can be associated with pedestrians' bounding-box coordinates to obtain additional camera-phone data correspondences, which allows automatic data collection during inference. Results show that after fine-tuning the GAN model on the expanded …
     (An illustrative sketch of such a conditional generator appears after this list.)
  3. We consider a collection of distributed sensor nodes periodically exchanging information to achieve real-time situational awareness in a communication-constrained setting, e.g., collaborative sensing amongst vehicles to improve safety-critical decisions. Nodes may be both consumers and producers of sensed information. Consumers express interest in information about particular locations, e.g., obstructed regions and/or road intersections, whilst producers broadcast updates on what they are currently able to see. Accordingly, we introduce and explore optimizing trade-offs between the coverage and the space-time interest-weighted average “age” of the information available to consumers. We consider two settings that capture the fundamental character of the problem. The first addresses selecting a subset of producers that maximizes the coverage of the consumers' preferred regions and minimizes the average age of these regions, given that producers provide updates at a fixed rate. The second addresses the minimization of the interest-weighted average age achieved by a fixed subset of producers with possibly overlapping coverage by optimizing their update rates. The first problem is shown to be submodular and thus amenable to greedy optimization, while the second has a non-convex/non-concave cost function which is amenable to effective optimization using the Frank-Wolfe algorithm. Numerical results exhibit the benefits of context-dependent optimization of information sharing among obstructed sensing nodes.
     (An illustrative sketch of the greedy selection step appears after this list.)
  4. Collaborative localization is an essential capability for a team of robots, such as connected vehicles, to collaboratively estimate object locations from multiple perspectives with resilient cooperation. To enable collaborative localization, four key challenges must be addressed, including modeling complex relationships between observed objects, fusing observations from an arbitrary number of collaborating robots, quantifying localization uncertainty, and addressing the latency of robot communications. In this paper, we introduce a novel approach that integrates uncertainty-aware spatiotemporal graph learning and model-based state estimation for a team of robots to collaboratively localize objects. Specifically, we introduce a new uncertainty-aware graph learning model that learns spatiotemporal graphs to represent the historical motions of the objects observed by each robot over time and provides uncertainties in object localization. Moreover, we propose a novel method for integrated learning and model-based state estimation, which fuses asynchronous observations obtained from an arbitrary number of robots for collaborative localization. We evaluate our approach in two collaborative object localization scenarios in simulations and on real robots. Experimental results show that our approach outperforms previous methods and achieves state-of-the-art performance on asynchronous collaborative localization.
     (An illustrative sketch of asynchronous observation fusion appears after this list.)
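
The GAN-based localization in item 2 can be pictured as a conditional generator that maps phone-side features (GPS, IMU, FTM) to 3D coordinates, with a discriminator judging whether a (phone features, position) pair looks like a real camera-phone correspondence. The sketch below is not the authors' code: the feature dimensions, network widths, and the added L1 regression term are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's implementation) of a conditional GAN that maps
# phone-side features to 3D pedestrian coordinates. All sizes are assumptions.
import torch
import torch.nn as nn

PHONE_DIM = 16   # assumed: flattened GPS + IMU + FTM feature vector
NOISE_DIM = 8    # latent noise fed to the generator
POS_DIM = 3      # 3D coordinates produced by the generator

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PHONE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, POS_DIM),
        )

    def forward(self, phone_feats, z):
        return self.net(torch.cat([phone_feats, z], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PHONE_DIM + POS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, phone_feats, pos):
        return self.net(torch.cat([phone_feats, pos], dim=-1))

def train_step(gen, disc, opt_g, opt_d, phone_feats, camera_pos,
               bce=nn.BCEWithLogitsLoss()):
    """One adversarial update on a batch of camera-phone correspondences."""
    batch = phone_feats.size(0)

    # Discriminator: real = camera-derived positions, fake = generated positions.
    fake_pos = gen(phone_feats, torch.randn(batch, NOISE_DIM)).detach()
    d_loss = bce(disc(phone_feats, camera_pos), torch.ones(batch, 1)) + \
             bce(disc(phone_feats, fake_pos), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the camera position.
    fake_pos = gen(phone_feats, torch.randn(batch, NOISE_DIM))
    g_loss = bce(disc(phone_feats, fake_pos), torch.ones(batch, 1)) + \
             nn.functional.l1_loss(fake_pos, camera_pos)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Usage with random stand-in tensors; at inference only phone features are needed.
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
train_step(gen, disc, opt_g, opt_d, torch.randn(32, PHONE_DIM), torch.randn(32, POS_DIM))
```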
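For the first problem in item 3, a budgeted greedy pass over producers is the standard way to exploit submodularity of an interest-weighted coverage objective. The sketch below is illustrative only: the producer coverage sets, weights, and budget are made-up inputs, and it addresses only the coverage part of the objective, not the age term or the Frank-Wolfe rate optimization of the second problem.

```python
# Greedy selection of producers under a budget, maximizing interest-weighted
# coverage of consumer regions (a monotone submodular objective, so the greedy
# solution carries the usual 1 - 1/e approximation guarantee).

def greedy_producer_selection(producer_coverage, region_weight, budget):
    """producer_coverage: dict producer_id -> set of region ids it observes.
    region_weight: dict region id -> consumer interest weight."""
    chosen, covered = [], set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for producer, regions in producer_coverage.items():
            if producer in chosen:
                continue
            gain = sum(region_weight.get(r, 0.0) for r in regions - covered)
            if gain > best_gain:
                best, best_gain = producer, gain
        if best is None:            # no remaining producer adds coverage
            break
        chosen.append(best)
        covered |= producer_coverage[best]
    return chosen

# Example: 3 producers, 4 regions of interest, budget of 2.
coverage = {"p1": {"A", "B"}, "p2": {"B", "C"}, "p3": {"C", "D"}}
weights = {"A": 1.0, "B": 2.0, "C": 2.0, "D": 0.5}
print(greedy_producer_selection(coverage, weights, budget=2))  # ['p2', 'p1']
```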
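Item 4 combines learned localization uncertainty with model-based state estimation. One way to picture the model-based side is a constant-velocity Kalman filter that replays asynchronously received observations in timestamp order, weighting each by its reported covariance, which here stands in for the uncertainty the graph learning model would predict. This is an assumed formulation for illustration, not the paper's implementation.

```python
# Fuse asynchronously arriving position observations from several robots with a
# constant-velocity Kalman filter; covariances play the role of learned uncertainty.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, pos, process_noise=0.1, init_var=1.0):
        self.x = np.hstack([pos, np.zeros(2)])   # state: [px, py, vx, vy]
        self.P = np.eye(4) * init_var
        self.q = process_noise
        self.t = 0.0

    def predict(self, t):
        dt = t - self.t
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                   # position advances by velocity * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * dt * np.eye(4)
        self.t = t

    def update(self, z, R):
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # observe position only
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

# Observations from three robots: (timestamp, position, covariance). They may
# arrive out of order due to communication latency, so sort before fusing.
obs = [
    (0.2, np.array([1.0, 0.1]), 0.3 * np.eye(2)),
    (0.1, np.array([0.9, 0.0]), 0.5 * np.eye(2)),
    (0.4, np.array([1.3, 0.2]), 0.2 * np.eye(2)),
]
kf = ConstantVelocityKF(pos=np.array([0.8, 0.0]))
for t, z, R in sorted(obs, key=lambda o: o[0]):
    kf.predict(t)
    kf.update(z, R)
print(kf.x[:2])   # fused object position estimate
```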