Efficient image retrieval by exploiting vertical fragmentation
In content-based retrieval systems, the goal of similarity search is to identify the k images most similar to a given example. Images are represented and queried by high-dimensional feature vectors encoding dominant characteristics such as color and texture. We propose an efficient similarity search method that is robust to dimensionality and has optimal space complexity. Our approach fragments the feature vectors vertically and computes the similarity of all images dimension by dimension. The innovation lies in gradually pruning images that cannot participate in the response set. We show how to apply this scheme to two common similarity metrics, namely histogram intersection and Euclidean distance. The implementation of our algorithm in Monet illustrates that core database technology supports image retrieval well, without special extensions. Finally, we evaluate our approach on real and synthetic data sets and show that it yields significant improvements in response time.
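The dimension-by-dimension pruning idea can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes squared Euclidean distance and feature values normalized to [0, 1], so each unprocessed dimension can contribute at most 1 to a candidate's final distance. The partial distance over the dimensions seen so far is a lower bound; partial plus the remaining-dimension budget is an upper bound; any candidate whose lower bound exceeds the k-th smallest upper bound can never enter the top k and is dropped.

```python
import numpy as np

def knn_vertical(data, query, k):
    """k-NN by vertical (dimension-wise) scanning with pruning.

    data:  (n, d) array of feature vectors, values assumed in [0, 1].
    query: (d,) query vector.
    Returns (ids, squared distances) of the k nearest vectors.
    """
    n, d = data.shape
    alive = np.arange(n)      # ids of surviving candidates
    partial = np.zeros(n)     # partial squared distances (lower bounds)
    for j in range(d):
        # Process one vertical fragment: dimension j of all survivors.
        partial = partial + (data[alive, j] - query[j]) ** 2
        remaining = d - (j + 1)
        # Each remaining dimension adds at most 1 under the [0, 1] assumption.
        upper = partial + remaining
        if len(alive) > k:
            # k-th smallest upper bound: candidates whose lower bound
            # exceeds it are already beaten by at least k others.
            thresh = np.partition(upper, k - 1)[k - 1]
            keep = partial <= thresh
            alive, partial = alive[keep], partial[keep]
    order = np.argsort(partial)[:k]
    return alive[order], partial[order]
```

The pruning is safe because the k candidates whose upper bounds define the threshold always satisfy `partial <= upper <= thresh`, so at least k candidates survive every round.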