In Step 1, we defined a method to extract features from an image. Now, we need to define a method to compare our feature vectors. A distance function should accept two feature vectors and return a value indicating how “similar” they are. Common choices for similarity functions include (but are certainly not limited to) the Euclidean, Manhattan, Cosine, and Chi-Squared distances.
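To make that contract concrete, here is a minimal sketch of two of those distance functions in Python with NumPy. The function names are my own and are purely illustrative; all that matters is that each one takes two feature vectors and returns a single similarity value:

```python
import numpy as np

def euclidean_distance(a, b):
    # straight-line distance between two feature vectors;
    # returns 0 if and only if the vectors are identical
    a = np.asarray(a, dtype="float64")
    b = np.asarray(b, dtype="float64")
    return np.sqrt(np.sum((a - b) ** 2))

def chi2_distance(a, b, eps=1e-10):
    # Chi-Squared distance, a popular choice for comparing histograms;
    # eps avoids division by zero for empty bins
    a = np.asarray(a, dtype="float64")
    b = np.asarray(b, dtype="float64")
    return 0.5 * np.sum(((a - b) ** 2) / (a + b + eps))
```

Which function you pick matters: the Chi-Squared distance tends to behave well when comparing histograms, while the Euclidean distance is a sensible general-purpose default.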
Finally, we are now ready to perform our last step in building an image search engine: Searching and Ranking.

The Query
Before we can perform a search, we need a query. The last time you went to Google, you typed some keywords into the search box, right? The text you entered into the input form was your “query”. Google then took your query, analyzed it, compared it to its gigantic index of webpages, ranked them, and returned the most relevant webpages back to you. Similarly, when we are building an image search engine, we need a query image.

Query images come in two flavors: an internal query image and an external query image. As the name suggests, an internal query image already belongs to our index: we have already analyzed it, extracted features from it, and stored its feature vector. The second type of query image is an external query image. This is the equivalent of typing our text keywords into Google. We have never seen this query image before and we can't make any assumptions about it. We simply apply our image descriptor, extract features, rank the images in our index based on their similarity to the query, and return the most relevant results. You may remember that when I wrote the How-To Guide on Building Your First Image Search Engine, I included support for both internal and external queries.

Let's think back to our similarity metrics for a second and assume that we are using the Euclidean distance. The Euclidean distance has a nice property called the Coincidence Axiom: the function returns a value of 0 (indicating perfect similarity) if and only if the two feature vectors are identical. If I were to search for an image already in my index, the Euclidean distance between the two feature vectors would be zero, implying perfect similarity, and that image would be placed at the top of my search results since it is the most relevant. This makes sense and is the intended behavior. It would be strange if I searched for an image already in my index and did not find it in the #1 result position; that would likely imply a bug in my code somewhere, or some very poor choices of image descriptors and similarity metrics.

Overall, using an internal query image serves as a sanity check: it allows you to make sure that your image search engine is functioning as expected. Once you can confirm that your image search engine is working properly, you can then accept external query images that are not already part of your index.

So what's the process of actually performing a search? Check out the outline below.

1. Accept a query image from the user
A user could be uploading an image from their desktop or from their mobile device. As image search engines become more prevalent, I suspect that most queries will come from devices such as iPhones and Droids. It's simple and intuitive to snap a photo of a place, object, or something that interests you using your cellphone, and then have it automatically analyzed and the relevant results returned.

2. Describe the query image
Now that you have a query image, you need to describe it using the exact same image descriptor(s) you used in the indexing phase. For example, if I used an RGB color histogram with 32 bins per channel when I indexed the images in my dataset, I am going to use the same 32-bins-per-channel histogram when describing my query image. This ensures that I have a consistent representation of my images. After applying my image descriptor, I now have a feature vector for the query image.
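As a rough sketch of what that consistency looks like in code, here is one way a 32-bins-per-channel color histogram descriptor could be written with OpenCV. The helper name describe and the 3D joint histogram layout are assumptions for illustration, not necessarily the exact descriptor from the original guide:

```python
import cv2

def describe(image_path, bins=(32, 32, 32)):
    # load the image (OpenCV stores channels in BGR order, which is fine
    # as long as the indexer and the query use the same convention)
    image = cv2.imread(image_path)

    # compute a 3D color histogram with 32 bins per channel,
    # then normalize it and flatten it into a feature vector
    hist = cv2.calcHist([image], [0, 1, 2], None, list(bins),
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    return hist

# the query image must be described with the exact same parameters
# that were used when indexing the dataset ("query.png" is a placeholder)
query_features = describe("query.png")
```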
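Finally, here is one way the “rank the images in our index and return the most relevant results” step could look, along with the internal-query sanity check described earlier. The index filename, its dictionary layout, and the reuse of the hypothetical describe() and chi2_distance() helpers from the sketches above are all assumptions made purely for illustration:

```python
import pickle

# load the index built during the indexing phase; a dict mapping each
# image filename to its feature vector, serialized with pickle, is assumed
index = pickle.loads(open("index.pkl", "rb").read())

# describe the query image with the SAME descriptor used at indexing time
query_features = describe("query.png")

# compute the distance between the query and every indexed feature vector,
# then sort so the smallest distances (most similar images) come first
results = sorted(
    (chi2_distance(features, query_features), filename)
    for (filename, features) in index.items()
)

# sanity check: an internal query (an image already in the index) should
# appear at rank #1 with a distance of (near) zero
for (distance, filename) in results[:10]:
    print("%.4f -> %s" % (distance, filename))
```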