Point cloud registration is an important task in fields like robotics, computer graphics, and medical imaging, involving the determination of spatial relationships between point sets in 3D space. Real-world challenges such as non-rigid deformations, partial visibility due to occlusions, and sensor noise make non-rigid registration particularly difficult. Traditional methods are often computationally intensive, exhibit unstable performance, and lack strong theoretical guarantees. Recently, the optimal transport problem, including unbalanced variations such as the optimal partial transport problem, has emerged as a powerful tool for point-cloud registration. These methods treat point clouds as empirical measures and provide a mathematically rigorous framework for quantifying the correspondence between transformed source and target points. In this paper, we address the non-rigid registration problem using optimal transport theory and introduce a set of non-rigid registration methods based on the optimal partial transport problem. Additionally, by leveraging efficient solutions to the one-dimensional optimal partial transport problem and extending them via slicing, we achieve significant computational efficiency, resulting in fast and robust registration algorithms. We validate our methods against baselines on a range of 3D and 2D non-rigid registration problems with noisy point clouds.
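The computational core of the sliced approach — reducing transport between point clouds to one-dimensional problems that can be solved by sorting, then averaging over random directions — can be sketched in a few lines. The following NumPy snippet is a minimal illustration under simplifying assumptions (equal-size clouds and balanced, rather than partial, matching); the function name and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def sliced_displacement(source, target, n_slices=64, seed=0):
    """Average 1D optimal-transport displacements over random directions.

    Assumes source and target are (n, d) arrays of equal size; the
    paper's partial-transport variant would also allow unmatched mass.
    """
    rng = np.random.default_rng(seed)
    n, d = source.shape
    disp = np.zeros_like(source)
    for _ in range(n_slices):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)            # random unit direction
        s_proj, t_proj = source @ theta, target @ theta
        s_idx, t_idx = np.argsort(s_proj), np.argsort(t_proj)
        # In 1D, optimal transport matches points in sorted order;
        # move each source point toward its match along theta.
        disp[s_idx] += np.outer(t_proj[t_idx] - s_proj[s_idx], theta)
    return disp / n_slices
```

Iterating `source += step * sliced_displacement(source, target)` gives a crude registration flow; the paper's methods instead use an efficient 1D partial-transport solver inside the slice so that occluded or noisy points can be left unmatched.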
-
Comparing spherical probability distributions is of great interest in various fields, including geology, medical domains, computer vision, and deep representation learning. The utility of optimal transport-based distances, such as the Wasserstein distance, for comparing probability measures has spurred active research in developing computationally efficient variations of these distances for spherical probability measures. This paper introduces a high-speed and highly parallelizable distance for comparing spherical measures using the stereographic projection and the generalized Radon transform, which we refer to as the Stereographic Spherical Sliced Wasserstein (S3W) distance. We carefully address the distance distortion caused by the stereographic projection and provide an extensive theoretical analysis of our proposed metric and its rotationally invariant variation. Finally, we evaluate the performance of the proposed metrics and compare them with recent baselines in terms of both speed and accuracy through a wide range of numerical studies, including gradient flows and self-supervised learning. Our code is available at https://github.com/mint-vu/s3wd
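To make the pipeline concrete, here is a minimal NumPy sketch of the two ingredients the abstract names: stereographic projection of spherical samples to Euclidean space, followed by a sliced Wasserstein estimate between the projected samples. It omits the paper's distortion correction and generalized Radon transform, so it illustrates the idea rather than the S3W distance itself; all names and defaults are illustrative.

```python
import numpy as np

def stereographic(x, eps=1e-9):
    """Map points on the unit sphere S^{d-1} (rows of x) to R^{d-1}
    by projecting from the north pole: y = x[:-1] / (1 - x[-1])."""
    return x[:, :-1] / (1.0 - x[:, -1:] + eps)

def sliced_w2(u, v, n_slices=128, seed=0):
    """Monte Carlo sliced 2-Wasserstein distance between equal-size samples."""
    rng = np.random.default_rng(seed)
    d = u.shape[1]
    total = 0.0
    for _ in range(n_slices):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        # 1D W2 between projections: match sorted samples.
        total += np.mean((np.sort(u @ theta) - np.sort(v @ theta)) ** 2)
    return np.sqrt(total / n_slices)

# Usage sketch: x, y are (n, d) arrays of unit vectors on the sphere.
# dist = sliced_w2(stereographic(x), stereographic(y))
```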
-
Continual learning has gained substantial attention within the deep learning community, offering promising solutions to the challenging problem of sequential learning. Yet a largely unexplored facet of this paradigm is its susceptibility to adversarial attacks, especially those aimed at inducing forgetting. In this paper, we introduce "Brain-Wash," a novel data poisoning method tailored to impose forgetting on a continual learner. By adding the Brain-Wash noise to the training data, we demonstrate how a trained continual learner can be induced to catastrophically forget its previously learned tasks, across a variety of continual learning baselines. An important feature of our approach is that the attacker requires no access to previous tasks' data and is armed merely with the model's current parameters and the data belonging to the most recent task. Our extensive experiments highlight the efficacy of Brain-Wash, showcasing degradation in performance across various regularization- and memory-replay-based continual learning methods. Our code is available here: https://github.com/mint-vu/Brainwash
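The threat model is easy to prototype: the attacker holds only the model's current parameters and the latest task's data, and optimizes a bounded perturbation of that data so that training on it disturbs what the network already encodes. The PyTorch sketch below uses the squared norm of the current-task parameter gradient as a crude stand-in proxy for the parameter drift (and hence forgetting) a training step would cause; the paper's actual objective differs, and loss_fn, eps, steps, and lr are all illustrative.

```python
import torch

def brainwash_style_noise(model, loss_fn, x, y, eps=8 / 255, steps=10, lr=0.01):
    """Craft bounded poisoning noise for the current task's inputs.

    Proxy objective (an assumption, not the paper's): maximize the norm of
    the parameter gradient at the current weights, so a training step on
    the poisoned data moves the weights far from their current values.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        drift_proxy = sum(g.pow(2).sum() for g in grads)
        opt.zero_grad()
        (-drift_proxy).backward()          # gradient ascent on the proxy
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)        # keep the poison small
    return (x + delta).detach()
```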
-
Fine-tuning Large Language Models (LLMs) and storing them for each downstream task or domain is impractical because of the massive model size (e.g., 350GB for GPT-3). Current literature, such as LoRA, showcases the potential of low-rank modifications to the original weights of an LLM, enabling efficient adaptation and storage of task-specific models. These methods can reduce the number of parameters needed to fine-tune an LLM by several orders of magnitude. Yet they face two primary limitations: (1) the parameter count is lower-bounded by the rank-one decomposition, and (2) the extent of reduction is heavily influenced by both the model architecture and the chosen rank. We introduce NOLA, which overcomes the rank-one lower bound present in LoRA. It achieves this by re-parameterizing the low-rank matrices in LoRA as linear combinations of randomly generated matrices (a basis) and optimizing only the linear mixture coefficients. This approach decouples the number of trainable parameters from both the choice of rank and the network architecture. We present adaptation results using GPT-2, LLaMA-2, and ViT on natural language and computer vision tasks. NOLA performs as well as LoRA while using far fewer parameters than rank-one LoRA, the best compression LoRA can achieve. In particular, on LLaMA-2 70B, our method is almost 20 times more compact than the most compressed LoRA, with no degradation in accuracy. Our code is available here: https://github.com/UCDvision/NOLA
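The re-parameterization is simple to express in code. Below is a minimal PyTorch sketch of a NOLA-style linear layer: the LoRA factors A and B are frozen random bases mixed by small trainable coefficient vectors, so the trainable parameter count (k_a + k_b) is independent of the rank and of the layer's dimensions. Class and argument names are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class NOLALinear(nn.Module):
    """NOLA-style adapter: train only mixture coefficients over fixed
    random low-rank bases (a sketch, not the official implementation)."""

    def __init__(self, base: nn.Linear, rank=4, k_a=64, k_b=64, scale=1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # frozen pretrained weights
            p.requires_grad_(False)
        out_f, in_f = base.weight.shape
        # Frozen random bases; only alpha and beta are trained.
        self.register_buffer("A_basis", torch.randn(k_a, rank, in_f) / in_f ** 0.5)
        self.register_buffer("B_basis", torch.randn(k_b, out_f, rank) / rank ** 0.5)
        self.alpha = nn.Parameter(torch.randn(k_a) / k_a ** 0.5)
        self.beta = nn.Parameter(torch.zeros(k_b))   # delta-W is zero at init
        self.scale = scale

    def forward(self, x):
        A = torch.einsum("k,kri->ri", self.alpha, self.A_basis)  # (rank, in_f)
        B = torch.einsum("k,kor->or", self.beta, self.B_basis)   # (out_f, rank)
        return self.base(x) + self.scale * (x @ A.t()) @ B.t()

# Usage sketch: adapter = NOLALinear(nn.Linear(768, 768), rank=8)
# Trainable parameters: 64 + 64, regardless of rank or layer width.
```

In practice the random bases can be regenerated from a PRNG seed at load time rather than stored, which is what makes the checkpoints so compact; they are kept as buffers here only for simplicity.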