This content will become publicly available on June 1, 2026

Title: Partial transport for point-cloud registration
Abstract: Point cloud registration is an important task in fields like robotics, computer graphics, and medical imaging, involving the determination of spatial relationships between point sets in 3D space. Real-world challenges, such as non-rigid movements, partial visibility due to occlusions, and sensor noise, make non-rigid registration particularly difficult. Traditional methods are often computationally intensive, exhibit unstable performance, and lack strong theoretical guarantees. Recently, the optimal transport problem, including unbalanced variations such as the optimal partial transport problem, has emerged as a powerful tool for point-cloud registration. These methods treat point clouds as empirical measures and provide a mathematically rigorous framework to quantify the correspondence between transformed source and target points. In this paper, we address the non-rigid registration problem using optimal transport theory and introduce a set of non-rigid registration methods based on the optimal partial transport problem. Additionally, by leveraging efficient solutions to the one-dimensional optimal partial transport problem and extending them via slicing, we achieve significant computational efficiency, resulting in fast and robust registration algorithms. We validate our methods against baselines on various 3D and 2D non-rigid registration problems with noisy point clouds.
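The core computational idea described in the abstract, reducing transport along many random projection directions to one-dimensional problems that can be solved by sorting, can be illustrated with a minimal sketch. The snippet below computes a sliced (balanced) transport discrepancy between two equal-size point clouds; the paper's method uses the partial (unbalanced) 1D variant, which additionally allows mass to be discarded to handle occlusions and outliers, and embeds it in a registration loop. The function and parameter names (`sliced_ot_distance`, `n_projections`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sliced_ot_distance(X, Y, n_projections=64, rng=None):
    """Approximate sliced transport discrepancy between two equal-size
    3D point clouds X and Y (each of shape (n, 3)).

    Each random direction reduces the problem to 1D, where balanced
    optimal transport between equal-size empirical measures is solved
    exactly by sorting. (The paper extends this idea to the *partial*
    1D problem, which also permits discarding mass.)
    """
    rng = np.random.default_rng(rng)
    # Random unit vectors (projection directions).
    theta = rng.normal(size=(n_projections, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)

    # Project both clouds onto each direction: shape (n, n_projections).
    X_proj = X @ theta.T
    Y_proj = Y @ theta.T

    # 1D OT between sorted projections, averaged over directions.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    return np.mean((X_sorted - Y_sorted) ** 2)

# Toy usage: a noisy rotated copy should score lower than an unrelated cloud.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    Y = X @ R.T + 0.01 * rng.normal(size=X.shape)
    Z = rng.normal(size=(500, 3)) + 5.0
    print(sliced_ot_distance(X, Y), sliced_ot_distance(X, Z))
```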
Award ID(s): 2339898
PAR ID: 10656697
Author(s) / Creator(s): ; ; ;
Publisher / Repository: Birkhäuser
Date Published:
Journal Name: Sampling Theory, Signal Processing, and Data Analysis
Volume: 23
Issue: 1
ISSN: 2730-5716
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Partial point cloud registration is a challenging problem in robotics, especially when the robot undergoes a large transformation, causing a significant initial pose error and a low overlap between measurements. This letter proposes exploiting equivariant learning from 3D point clouds to improve registration robustness. We propose SE3ET, an SE(3)-equivariant registration framework that employs equivariant point convolution and equivariant transformer designs to learn expressive and robust geometric features. We tested the proposed registration method on indoor and outdoor benchmarks where the point clouds are under arbitrary transformations and have low overlap ratios. We also report generalization tests and run-time performance.
  2. The integration of structure from motion (SFM) and unmanned aerial vehicle (UAV) technologies has allowed for the generation of very high-resolution three-dimensional (3D) point cloud data (up to millimeters) to detect and monitor surface changes. However, a bottleneck still exists in accurately and rapidly registering the point clouds at different times. The existing point cloud registration algorithms, such as the Iterative Closest Point (ICP) and the Fast Global Registration (FGR) method, were mainly developed for the registration of small and static point cloud data, and do not perform well when dealing with large point cloud data with potential changes over time. In particular, registering large data is computationally expensive, and the inclusion of changing objects reduces the accuracy of the registration. In this paper, we develop an AI-based workflow to ensure high-quality registration of the point clouds generated using UAV-collected photos. We first detect stable objects from the ortho-photo produced by the same set of UAV-collected photos to segment the point clouds of these objects. Registration is then performed only on the partial data with these stable objects. The application of this workflow using the UAV data collected from three erosion plots at the East Tennessee Research and Education Center indicates that our workflow outperforms the existing algorithms in both computational speed and accuracy. This AI-based workflow significantly improves computational efficiency and avoids the impact of changing objects for the registration of large point cloud data. 
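As a rough illustration of the workflow's key step, registering only the subsets of points that belong to stable objects, consider the sketch below. The boolean masks stand in for the output of the stable-object detection step, and Open3D's point-to-point ICP is assumed as the registration backend purely for illustration; the paper's actual pipeline is not reproduced here.

```python
import numpy as np
import open3d as o3d  # assumed backend; the paper's implementation may differ

def register_stable_subsets(source_pts, target_pts, source_mask, target_mask,
                            max_corr_dist=0.5):
    """Register two large point clouds using only the points flagged as
    belonging to stable (unchanging) objects.

    source_pts, target_pts : (N, 3) and (M, 3) float arrays
    source_mask, target_mask : boolean arrays marking stable points
    Returns a 4x4 rigid transformation aligning source to target.
    """
    # Keep only the stable subsets; this shrinks the problem and avoids
    # biasing the alignment with objects that changed between surveys.
    src = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(source_pts[source_mask].astype(np.float64)))
    tgt = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(target_pts[target_mask].astype(np.float64)))

    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # apply this to the *full* source cloud
```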
  3. We present MultiBodySync, a novel, end-to-end trainable multi-body motion segmentation and rigid registration framework for multiple input 3D point clouds. The two non-trivial challenges posed by this multi-scan, multi-body setting that we investigate are: (i) guaranteeing correspondence and segmentation consistency across multiple input point clouds capturing different spatial arrangements of bodies or body parts; and (ii) obtaining robust motion-based rigid body segmentation applicable to novel object categories. We propose an approach to address these issues that incorporates spectral synchronization into an iterative deep declarative network, so as to simultaneously recover consistent correspondences as well as motion segmentation. At the same time, by explicitly disentangling the correspondence and motion segmentation estimation modules, we achieve strong generalizability across different object categories. Our extensive evaluations demonstrate that our method is effective on various datasets ranging from rigid parts in articulated objects to individually moving objects in a 3D scene, whether single-view or full point clouds.
  4. 3D representations of geographical surfaces in the form of dense point clouds can be a valuable tool for documenting and reconstructing a structural collapse, such as the 2021 Champlain Towers Condominium collapse in Surfside, Florida. Point cloud data reconstructed from aerial footage taken by uncrewed aerial systems at frequent intervals from a dynamic search and rescue scene poses significant challenges. Properly aligning, or registering, large point clouds in this context poses noteworthy issues, as they capture multiple regions whose geometries change over time. These regions contain dynamic features such as excavation machinery, cones marking boundaries, and the structural collapse rubble itself. In this paper, the performance of commonly used point cloud registration methods on the dynamic scenes present in the raw data is studied. The use of Iterative Closest Point (ICP), rigid Coherent Point Drift (CPD), and PointNetLK for registering dense point clouds, reconstructed sequentially over a timeframe of five days, is studied and evaluated. All methods are compared on registration error, execution time, and robustness, with a concluding analysis and a judgement of the most suitable method for the specific data at hand.
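For context on the baselines compared above, a minimal point-to-point ICP, the textbook variant of the first evaluated method, alternates nearest-neighbour correspondence search with a closed-form rigid update via SVD. The sketch below is a generic illustration under that assumption, not the implementation used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, n_iters=50, tol=1e-6):
    """Minimal point-to-point ICP aligning `source` (N,3) to `target` (M,3).
    Returns (R, t) such that source @ R.T + t approximates target."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(n_iters):
        # 1. Correspondences: nearest target point for each source point.
        dists, idx = tree.query(src)
        matched = target[idx]

        # 2. Closed-form rigid update (Kabsch / SVD).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = tgt_c - R @ src_c

        # 3. Apply the update and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t

        # Stop when the mean squared correspondence error plateaus.
        err = np.mean(dists ** 2)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```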
  5. 3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms, which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To the best of our knowledge, this is the first publication on a LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network and to use the falsifying examples to make the network more robust through retraining. In addition, the scene images can be captured simultaneously for sensor fusion tasks, with a method proposed to perform automatic registration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses/blind spots of the neural network can be fixed.
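On the training side, the reported augmentation amounts to concatenating the real and game-generated labelled scans before training the segmentation network. The sketch below uses random tensors as stand-ins for both datasets; the shapes, class count, and batch size are illustrative assumptions, not the paper's configuration.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for real and game-generated scans: each sample is a
# (points, per-point labels) pair. Sizes and label count are illustrative.
real_scans = TensorDataset(torch.randn(100, 4096, 3),
                           torch.randint(0, 5, (100, 4096)))
synthetic_scans = TensorDataset(torch.randn(300, 4096, 3),
                                torch.randint(0, 5, (300, 4096)))

# Augment the real training set with the simulated point clouds before
# training the segmentation network.
train_set = ConcatDataset([real_scans, synthetic_scans])
train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
```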