- Award ID(s): 2309570
- PAR ID: 10509363
- Editor(s): Kontorovich, Aryeh
- Publisher / Repository: Electronic publishing (volume 25, number 121)
- Date Published:
- Journal Name: Journal of Machine Learning Research
- Edition / Version: NA
- Volume: 25
- Issue: 121
- ISSN: 1533-7928
- Page Range / eLocation ID: 1-53
- Subject(s) / Keyword(s): linear metric learning, Mahalanobis distance, positive semi-definite matrix, low-rank metric learning
- Format(s): Medium: X; Size: 1MB; Other: pdf
- Size(s): 1MB
- Sponsoring Org: National Science Foundation
More Like this
-
In linear distance metric learning, we are given data in one Euclidean metric space and the goal is to find an appropriate linear map to another Euclidean metric space which respects certain distance conditions as much as possible. In this paper, we formalize a simple and elegant method which reduces to a general continuous convex loss optimization problem, and for different noise models we derive the corresponding loss functions. We show that even if the data is noisy, the ground truth linear metric can be learned with any precision provided access to enough samples, and we provide a corresponding sample complexity bound. Moreover, we present an effective way to truncate the learned model to a low-rank model that can provably maintain the accuracy in the loss function and in parameters, the first such results of this type. Several experimental observations on synthetic and real data sets support and inform our theoretical results.
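  As a rough illustration of the general recipe this abstract describes, the sketch below learns a positive semi-definite Mahalanobis matrix from noisy squared-distance observations by projected gradient descent on a convex squared loss, then truncates small eigenvalues to obtain a low-rank linear map. This is a minimal sketch under assumed inputs, not the paper's exact loss, optimizer, or truncation guarantee; the function names, learning rate, and pair/target format are hypothetical.

  ```python
  # Minimal sketch (not the paper's exact method): learn a PSD Mahalanobis
  # matrix M from noisy squared-distance observations, then truncate to low rank.
  import numpy as np

  def learn_mahalanobis(X, pairs, targets, lr=0.01, iters=500):
      """X: (n, d) data; pairs: index pairs (i, j); targets: observed
      (possibly noisy) squared distances for those pairs."""
      d = X.shape[1]
      M = np.eye(d)
      for _ in range(iters):
          grad = np.zeros((d, d))
          for (i, j), t in zip(pairs, targets):
              diff = X[i] - X[j]
              resid = diff @ M @ diff - t              # squared-loss residual
              grad += 2.0 * resid * np.outer(diff, diff)
          M -= lr * grad / len(pairs)
          w, V = np.linalg.eigh(M)                     # project back onto the PSD cone
          M = (V * np.clip(w, 0.0, None)) @ V.T
      return M

  def low_rank_map(M, rank):
      """Keep the top `rank` eigen-directions; returns L with M ~ L @ L.T,
      i.e. a linear map into a rank-dimensional Euclidean space."""
      w, V = np.linalg.eigh(M)
      top = np.argsort(w)[::-1][:rank]
      return V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
  ```

  The eigenvalue truncation in `low_rank_map` mirrors the low-rank step discussed in the abstract, though the paper's accuracy guarantees apply to its own procedure, not to this sketch.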
-
Many metric learning tasks, such as triplet learning, nearest neighbor retrieval, and visualization, are treated primarily as embedding tasks where the ultimate metric is some variant of the Euclidean distance (e.g., cosine or Mahalanobis), and the algorithm must learn to embed points into the pre-chosen space. The study of non-Euclidean geometries is often not explored, which we believe is due to a lack of tools for learning non-Euclidean measures of distance. Recent work has shown that Bregman divergences can be learned from data, opening a promising approach to learning asymmetric distances. We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks and show that it overcomes significant limitations of previous works. We also demonstrate that our method more faithfully learns divergences over a set of both new and previously studied tasks, including asymmetric regression, ranking, and clustering. Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification, showing the general utility of our approach for asymmetric learning.
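  For readers unfamiliar with the divergence family mentioned above, the short example below evaluates the Bregman divergence D_phi(x, y) = phi(x) − phi(y) − ⟨∇phi(y), x − y⟩ for two fixed convex generators. The cited work instead learns phi with an input convex neural network, so the generators and variable names here are purely illustrative.

  ```python
  # Illustration of a Bregman divergence from a convex generator phi.
  # The cited method learns phi with an input convex neural network; here
  # two hand-picked generators stand in for a learned one.
  import numpy as np

  def bregman(phi, grad_phi, x, y):
      """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
      return phi(x) - phi(y) - grad_phi(y) @ (x - y)

  # phi(x) = 0.5 * ||x||^2 recovers the (symmetric) squared Euclidean distance.
  sq, sq_grad = (lambda v: 0.5 * v @ v), (lambda v: v)

  # phi(x) = sum x log x gives the generalized KL divergence, which is asymmetric.
  ent, ent_grad = (lambda v: np.sum(v * np.log(v))), (lambda v: np.log(v) + 1.0)

  x = np.array([0.2, 0.3, 0.5])
  y = np.array([0.4, 0.4, 0.2])
  print(bregman(sq, sq_grad, x, y))                                  # equals 0.5 * ||x - y||^2
  print(bregman(ent, ent_grad, x, y), bregman(ent, ent_grad, y, x))  # differ: asymmetry
  ```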
-
Given a matrix D describing the pairwise dissimilarities of a data set, a common task is to embed the data points into Euclidean space. The classical multidimensional scaling (cMDS) algorithm is a widespread method to do this. However, theoretical analysis of the robustness of the algorithm and an in-depth analysis of its performance on non-Euclidean metrics is lacking. In this paper, we derive a formula, based on the eigenvalues of a matrix obtained from D, for the Frobenius norm of the difference between D and the metric D_cmds returned by cMDS. This error analysis leads us to the conclusion that when the derived matrix has a significant number of negative eigenvalues, then ∥D − D_cmds∥_F, after initially decreasing, will eventually increase as we increase the dimension. Hence, counterintuitively, the quality of the embedding degrades as we increase the dimension. We empirically verify that the Frobenius norm increases as we increase the dimension for a variety of non-Euclidean metrics. We also show on several benchmark datasets that this degradation in the embedding results in the classification accuracy of both simple (e.g., 1-nearest neighbor) and complex (e.g., multi-layer neural nets) classifiers decreasing as we increase the embedding dimension. Finally, our analysis leads us to a new efficiently computable algorithm that returns a matrix D_l that is at least as close to the original distances as D_t (the Euclidean metric closest in ℓ2 distance). While D_l is not a metric, when given as input to cMDS instead of D, it empirically results in solutions whose distance to D does not increase when we increase the dimension and whose classification accuracy degrades less than that of the cMDS solution.
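  To make the procedure being analyzed concrete, here is a minimal classical MDS routine: square and double-center the dissimilarities, eigendecompose, and embed using the top k eigenpairs, clipping negative eigenvalues to zero. This is the textbook cMDS algorithm rather than the paper's new D_l construction, and the function name and interface are assumptions of the sketch.

  ```python
  # Textbook classical MDS (cMDS); not the paper's proposed D_l algorithm.
  import numpy as np

  def cmds(D, k):
      """D: (n, n) symmetric dissimilarity matrix; returns an (n, k) embedding."""
      n = D.shape[0]
      J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
      B = -0.5 * J @ (D ** 2) @ J              # double-centered "Gram" matrix
      w, V = np.linalg.eigh(B)
      top = np.argsort(w)[::-1][:k]            # largest eigenvalues first
      w_top = np.clip(w[top], 0.0, None)       # negative eigenvalues (non-Euclidean D) are dropped
      return V[:, top] * np.sqrt(w_top)
  ```

  Computing the pairwise distances of `cmds(D, k)` for increasing k and comparing them to D in Frobenius norm reproduces the kind of dimension sweep the abstract describes for non-Euclidean D.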
-
A cognitive map is an internal representation of the external world that guides flexible behavior in a complex environment. Cognitive map theory assumes that relationships between entities can be organized using Euclidean-based coordinates. Previous studies revealed that cognitive map theory can also be generalized to inferences about abstract spaces, such as social spaces. However, it is still unclear whether humans can construct a cognitive map by combining relational knowledge between discrete entities with multiple abstract dimensions in nonsocial spaces. Here we asked subjects to learn to navigate a novel object space defined by two feature dimensions, price and abstraction. The subjects first learned the rank relationships between objects in each feature dimension and then completed a transitive inference task. We recorded brain activity using functional magnetic resonance imaging (fMRI) while they performed the transitive inference task. By analyzing the behavioral data, we found that the Euclidean distance between objects had a significant effect on response time (RT): the longer the one-dimensional rank distance and two-dimensional (2D) Euclidean distance between objects, the shorter the RT. The task-fMRI data were analyzed using both univariate analysis and representational similarity analysis. We found that the hippocampus, entorhinal cortex, and medial orbitofrontal cortex were able to represent the Euclidean distance between objects in 2D space. Our findings suggest that relationship inferences between discrete objects can be made in a 2D nonsocial space and that the neural basis of this inference is related to cognitive maps.