Abstract
Locality sensitive hashing (LSH) is an efficient method for approximate similarity search in high-dimensional spaces. Through LSH, a high-dimensional similarity join can be processed in the same way as a hash join, making the cost of joining two large datasets linear. By judiciously analyzing the properties of multiple LSH algorithms, we propose a generic method to accelerate the process of joining two large datasets using LSH. The crux of our method lies in the way we identify a set of representative points to reduce the number of LSH lookups. Theoretical analyses show that our proposed method can greatly reduce the number of lookup operations while retaining the same result accuracy as executing LSH lookups for every query point. Furthermore, we demonstrate the generality of our method by showing that the same principle can be applied to LSH algorithms for three different metrics: Euclidean distance (QALSH), Jaccard similarity (MinHash), and Hamming distance (sequence hashing). Results from experimental studies using real datasets confirm our error analyses and show significant improvements of our method over the state-of-the-art LSH method: to achieve over 0.95 recall, we only need to perform LSH lookups for at most 15% of the query points.
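As a point of reference for one of the LSH schemes named in the abstract (MinHash for Jaccard similarity), the following is a minimal, self-contained sketch of how MinHash signatures approximate set similarity. It is an illustration only, not the authors' implementation or the method proposed in the paper; the signature length, hash family, and function names are assumptions chosen for readability.

```python
import random

# Illustrative MinHash sketch (not the paper's implementation).
# Two sets agree in a given signature position with probability
# approximately equal to their Jaccard similarity.

NUM_HASHES = 64          # signature length (illustrative choice)
PRIME = (1 << 61) - 1    # large Mersenne prime used by the hash family

random.seed(42)
HASH_PARAMS = [(random.randrange(1, PRIME), random.randrange(PRIME))
               for _ in range(NUM_HASHES)]

def minhash_signature(items):
    """Return the MinHash signature of a non-empty set of hashable items."""
    return [min((a * hash(x) + b) % PRIME for x in items)
            for a, b in HASH_PARAMS]

def estimated_jaccard(sig1, sig2):
    """Estimate Jaccard similarity as the fraction of matching positions."""
    return sum(s1 == s2 for s1, s2 in zip(sig1, sig2)) / len(sig1)

# Example: two highly overlapping word sets.
s1 = set("locality sensitive hashing for similarity join".split())
s2 = set("locality sensitive hashing for similarity search".split())
print(estimated_jaccard(minhash_signature(s1), minhash_signature(s2)))
```

In an LSH-based similarity join, such signatures would be bucketed (e.g., by banding) so that only points sharing a bucket are compared; the paper's contribution is to further reduce the number of lookups by issuing them only for a set of representative query points.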
| Original language | English |
|---|---|
| Title of host publication | Proceedings - International Conference on Data Engineering |
| Publisher | IEEE Computer Society |
| Pages | 29-30 |
| ISBN (Print) | 9781509065431 |
| DOIs | |
| Publication status | Published - 16 May 2017 |
| Event | 33rd IEEE International Conference on Data Engineering, ICDE 2017, San Diego, United States, 19 Apr 2017 → 22 Apr 2017 |
Publication series
| Name | |
|---|---|
| ISSN (Print) | 1084-4627 |
Conference
| Conference | 33rd IEEE International Conference on Data Engineering, ICDE 2017 |
|---|---|
| Place | United States |
| City | San Diego |
| Period | 19/04/17 → 22/04/17 |
Student theses
- Approximate Similarity Search in High-dimensional Spaces
  YU, C. (Author), NUTANONG, S. (Supervisor), 11 Jan 2018
  Student thesis: Doctoral Thesis