ONe Index for All Kernels (ONIAK): A Zero Re-Indexing LSH Solution to ANNS-ALT (ANNS After Linear Transformation)

In this work, we formulate and solve a new type of approximate nearest neighbor search (ANNS) problem called ANNS after linear transformation (ANNS-ALT). In ANNS-ALT, we search for the vector (in a dataset) that, after being linearly transformed by a user-specified query matrix, is closest to a query vector. It is a very general mother problem in the sense that a wide range of baby ANNS problems that have important applications in databases and machine learning can be reduced to and solved as ANNS-ALT, or its dual, which we call ANNS-ALTD. We propose a novel and computationally efficient solution, called ONe Index for All Kernels (ONIAK), to ANNS-ALT and all its baby problems when the data dimension d is not too large (say d ≤ 200). In ONIAK, a universal index is built, once and for all, for answering all future ANNS-ALT queries that can have distinct query matrices. We show by experiments that, when d is not too large, ONIAK has better query performance than linear scan on the mother problem (ANNS-ALT), and has query performance comparable to that of the state-of-the-art solutions on the baby problems. However, the algorithmic technique behind this universal index approach suffers from a so-called dimension blowup problem that can make the indexing time prohibitively long for a large dataset. We propose a novel algorithmic technique, called fast GOE quadratic form (FGoeQF), that completely solves the (prohibitively long indexing time) fallout of the dimension blowup problem. We also propose a Johnson-Lindenstrauss transform (JLT) based ANNS-ALT (and ANNS-ALTD) solution that significantly outperforms any competitor when d is large.
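To make the problem statement concrete, here is a minimal NumPy sketch of the linear-scan baseline for ANNS-ALT that the abstract compares against: given a dataset of vectors, a query-time matrix A, and a query vector q, it returns the dataset vector x minimizing ||A x − q||. The function name and parameters are illustrative, not from the paper; ONIAK's universal index is designed precisely to avoid this per-query scan.

```python
import numpy as np

def alt_linear_scan(dataset, A, q):
    """Brute-force ANNS-ALT baseline: index of the dataset vector x that
    minimizes ||A @ x - q|| under the query-time linear transform A.
    Costs O(n * m * d) per query for n vectors of dimension d and an
    m-by-d query matrix."""
    transformed = dataset @ A.T                  # (n, m): row i is A @ x_i
    dists = np.linalg.norm(transformed - q, axis=1)
    return int(np.argmin(dists))

# Toy usage: the query is a lightly perturbed transform of dataset vector 42.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))              # n = 1000 vectors, d = 64
A = rng.standard_normal((32, 64))                # query matrix, m = 32
q = A @ X[42] + 0.01 * rng.standard_normal(32)   # close to A @ X[42]
answer = alt_linear_scan(X, A, q)
```

Note that the query matrix A arrives only at query time, which is why a classical per-metric ANNS index cannot be prebuilt here and a universal index is needed.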
 NSF-PAR ID:
 10378403
 Date Published:
 Journal Name:
 Proceedings of the VLDB Endowment
 Volume:
 15
 Issue:
 13
 ISSN:
 2150-8097
 Page Range / eLocation ID:
 3937-3949
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this


We study the framework of universal dynamic regret minimization with strongly convex losses. We answer an open problem in Baby and Wang 2021 by showing that, in a proper learning setup, strongly adaptive algorithms can achieve the near-optimal dynamic regret of Õ(d^{1/3} n^{1/3} TV[u_{1:n}]^{2/3} ∨ d) against any comparator sequence u_1, …, u_n simultaneously, where n is the time horizon and TV[u_{1:n}] is the total variation of the comparator. These results are facilitated by exploiting a number of new structures imposed by the KKT conditions that were not considered in Baby and Wang 2021, which also lead to other improvements over their results, such as: (a) handling non-smooth losses and (b) improving the dimension dependence of the regret. Further, we also derive near-optimal dynamic regret rates for the special case of proper online learning with exp-concave losses and an L∞-constrained decision set.

Given a collection of vectors, the approximate K-nearest-neighbor graph (KGraph for short) connects every vector to its approximate K nearest neighbors (KNN for short). KGraph plays an important role in high-dimensional data visualization, semantic search, manifold learning, and machine learning. The vectors are typically vector representations of real-world objects (e.g., images and documents), which often come with a few structured attributes, such as timestamps and locations. In this paper, we study the all-range approximate K-nearest-neighbor graph (ARKGraph) problem. Specifically, given a collection of vectors, each associated with a numerical search key (e.g., a timestamp), we aim to build an index that takes a search key range as the query and returns the KGraph of the vectors whose search keys are within the query range. ARKGraph can facilitate interactive high-dimensional data visualization, data mining, etc. A key challenge of this problem is the huge index size. This is because, given n vectors, a brute-force index stores a KGraph for every search key range, which results in O(K n^3) index size, as there are O(n^2) search key ranges and each KGraph takes O(K n) space. We observe that the KNN of a vector in nearby ranges are often the same and can be grouped together to save space. Based on this observation, we propose a series of novel techniques that reduce the index size significantly, to just O(K n log n) in the average case. Furthermore, we develop an efficient indexing algorithm that constructs the optimized ARKGraph index directly, without exhaustively calculating the distance between every pair of vectors. To process a query, for each vector in the query range, we need only O(log log n + K log K) time to restore its KNN in the query range from the optimized ARKGraph index. We conducted extensive experiments on real-world datasets. Experimental results show that our optimized ARKGraph index achieves a small index size, low query latency, and good scalability. Specifically, our approach was 1000x faster than the baseline method that builds a KGraph for all the vectors in the query range on the fly.
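The on-the-fly baseline mentioned above can be sketched in a few lines: filter the vectors by search key, then connect each survivor to its K exact nearest neighbors within the range. This is a minimal illustrative implementation (names are ours, not from the paper), and it is exactly the O(m^2)-distance computation per query that the optimized ARKGraph index avoids.

```python
import numpy as np

def range_kgraph(vectors, keys, lo, hi, K):
    """On-the-fly ARKGraph baseline: build the KGraph restricted to the
    vectors whose search key lies in [lo, hi]. Exact, computing O(m^2)
    pairwise distances for the m in-range vectors."""
    idx = [i for i, k in enumerate(keys) if lo <= k <= hi]
    sub = np.asarray([vectors[i] for i in idx], dtype=float)
    graph = {}
    for row, i in enumerate(idx):
        d = np.linalg.norm(sub - sub[row], axis=1)
        d[row] = np.inf                 # a vector is not its own neighbor
        nn = np.argsort(d)[:K]          # K nearest within the key range
        graph[i] = [idx[j] for j in nn]
    return graph

# Toy usage: four 1-d vectors with timestamps 1..4; query the range [1, 3].
g = range_kgraph([[0.0], [1.0], [2.0], [10.0]], [1, 2, 3, 4], 1, 3, K=1)
```

In the toy query, vector 3 (key 4) falls outside the range, so only vectors 0-2 appear in the returned graph.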

Random dimensionality reduction is a versatile tool for speeding up algorithms for high-dimensional problems. We study its application to two clustering problems: the facility location problem, and the single-linkage hierarchical clustering problem, which is equivalent to computing the minimum spanning tree. We show that if we project the input point set X onto a random d = O(d_X)-dimensional subspace (where d_X is the doubling dimension of X), then the optimum facility location cost in the projected space approximates the original cost up to a constant factor. We show an analogous statement for the minimum spanning tree, but with the dimension d having an extra log log n term and the approximation factor being arbitrarily close to 1. Furthermore, we extend these results to approximating the solutions themselves rather than just their costs. Lastly, we provide experimental results validating the quality of the solutions and the speedup due to dimensionality reduction. Unlike several previous papers that study this approach in the context of k-means and k-medians, our dimension bound does not depend on the number of clusters but only on the intrinsic dimensionality of X.
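The experiment described in this abstract can be sketched as follows: project the points with a Gaussian random (Johnson-Lindenstrauss style) matrix and compare the minimum spanning tree cost before and after. The MST routine and the target dimension k below are our illustrative choices, not the paper's (which ties the dimension to the doubling dimension d_X).

```python
import numpy as np

def mst_cost(points):
    """Exact Euclidean MST cost via Prim's algorithm, O(n^2) time."""
    n = len(points)
    in_tree = np.zeros(n, dtype=bool)
    dist = np.full(n, np.inf)   # cheapest edge from each point to the tree
    dist[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = int(np.argmin(np.where(in_tree, np.inf, dist)))
        total += dist[u]
        in_tree[u] = True
        dist = np.minimum(dist, np.linalg.norm(points - points[u], axis=1))
    return total

# Project onto a random k-dimensional subspace, then compare MST costs.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 100))             # n = 200 points in 100 dims
k = 16                                          # illustrative target dimension
G = rng.standard_normal((100, k)) / np.sqrt(k)  # Gaussian JL-style projection
ratio = mst_cost(X @ G) / mst_cost(X)           # close to 1 in practice
```

The scaling by 1/sqrt(k) keeps projected distances roughly unbiased, so the cost ratio concentrates near 1 as k grows.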

We introduce a new technique for indexing joins in encrypted SQL databases, called partially precomputed joins, which achieves lower leakage and bandwidth than the techniques used in prior constructions. These techniques are incorporated into state-of-the-art structured encryption schemes for SQL data, yielding a hybrid indexing scheme with both partially and fully precomputed join indexes. We then introduce the idea of leakage-aware query planning by giving a heuristic that helps the client decide, at query time, which index to use so as to minimize leakage and stay below a given bandwidth budget. We conclude by simulating our constructions on real datasets, showing that our heuristic is accurate and that partially precomputed joins perform well in practice.