http://www.iaeme.com/IJCET/index.asp
24
editor@iaeme.com
1. INTRODUCTION
Web image retrieval starts with keyword queries, and the major challenge is the ambiguity of the query keyword specified by the user. An ambiguous keyword produces noisy image results. For example, with cakes as the keyword, the retrieved images fall into many different categories: cakes of different shapes, styles, and so on. Image re-ranking has therefore been adopted to improve search results. It starts with a query keyword, for which a set of images is retrieved; the user then selects an image reflecting the search intention, and the remaining images are re-ranked according to the visual features of the selected image.
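The re-ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the feature vectors, image identifiers, and the choice of Euclidean distance are all assumptions made for the sake of the example.

```python
import numpy as np

def rerank_by_visual_similarity(query_feature, candidate_features):
    """Re-rank candidate images by distance to the user-selected image.

    query_feature: 1-D visual feature vector of the image the user clicked.
    candidate_features: dict mapping image id -> 1-D feature vector.
    Returns image ids ordered from most to least visually similar.
    """
    distances = {
        img_id: float(np.linalg.norm(feat - query_feature))
        for img_id, feat in candidate_features.items()
    }
    # Smaller distance means higher visual similarity, so sort ascending.
    return sorted(distances, key=distances.get)
```

For instance, given a clicked image with feature `[1.0, 0.0]`, a candidate with feature `[0.9, 0.1]` ranks ahead of one with feature `[0.0, 1.0]`.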
The major problem in image re-ranking is that images with similar low-level visual features cannot be correlated with the higher-level semantic meaning of the image selected by the user. The conventional re-ranking framework is shown in Fig. 1. For example, if a user searches for the keyword apple, the results may contain both green apples and red apples. If the user's intent is a red apple, the other images should be re-ranked according to the semantics of that image; that is, all returned images should be similar to the red apple, and no images that are close only in low-level visual features but semantically diverse should be retrieved. The other challenge is that it is difficult to learn a universal visual semantic space that characterizes highly diverse images as the image database grows.
Figure 1. Conventional image re-ranking framework: the query keywords retrieve an initial set of images, the user selects a query image, and the results are re-ranked by visual similarity to it.
Mangai P.
2. RELATED WORK
In Internet image retrieval, images are linked with varying numbers of text keywords. Text-based image retrieval starts by matching the query keywords against the keywords linked with images in the database; the matched images are returned as results to the user. As the image database grows, it becomes cumbersome for technical staff to label the images, since each image can belong to several categories. For example, an image of a pen can be labeled by its type, its manufacturer, and so on; thus each image can carry several identifying labels. Content-based image retrieval (CBIR) was introduced with a self-learning capability to interconnect images. The images retrieved by such a search are based on content that was labeled manually. As the image database grows, it becomes impractical to label all available images by hand, and equally impractical to connect all similar images manually.
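The keyword-matching step of text-based retrieval described above can be sketched with a simple inverted index. This is an illustrative assumption about how such matching might be organized, not a description of any particular system; the label data and function names are hypothetical.

```python
from collections import defaultdict

def build_keyword_index(image_labels):
    """Build an inverted index: keyword -> set of image ids.

    image_labels: dict mapping image id -> list of keyword labels.
    """
    index = defaultdict(set)
    for img_id, keywords in image_labels.items():
        for kw in keywords:
            index[kw.lower()].add(img_id)
    return index

def text_based_search(index, query_keywords):
    """Return ids of images whose labels match every query keyword."""
    sets = [index.get(kw.lower(), set()) for kw in query_keywords]
    if not sets:
        return set()
    result = sets[0]
    for s in sets[1:]:
        result = result & s  # intersection: image must match all keywords
    return result
```

With labels such as {"img1": ["red", "apple"], "img2": ["green", "apple"]}, the query ["apple"] returns both images, while ["red", "apple"] returns only img1, illustrating how ambiguous single keywords yield mixed results.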
Predicting similar images from color and texture alone is not considered efficient, because it leaves a semantic gap between low-level features and high-level features (the user's intention). Short-term and long-term learning approaches are used to reduce this gap. Short-term learning learns the list of images relevant to the user's query keywords, while long-term learning learns the connections between images from the short-term learning results. Although CBIR is considered more efficient than text-based retrieval, it still returns noisy results to the user.
The one-click Internet image search method overcomes these limitations of CBIR. It helps resolve the ambiguity in image retrieval by combining text-based retrieval and CBIR with additional visual features to learn the user's intention. The major shortcoming of this method is that visually similar images are still not correlated with the intended semantics.
Figure 2. Re-ranking framework with semantic signatures: in the offline stage, query-specific reference classes and their classifiers are learned and semantic signatures are extracted; in the online stage, a query produces text-based results that are then re-ranked.
For every query keyword, multiclass classifiers over low-level visual features are trained and stored. Each image in the database is associated with different keywords through a word-image index file. With the multiclass classifiers of the reference classes and the word-image index file in place, the visual features of each image are fed to the stored classifiers of its keyword's reference classes, and the classifier outputs form the extracted semantic signature.
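The offline extraction step can be sketched as follows. This is a simplified illustration only: where the framework trains multiclass classifiers on the reference classes, the sketch substitutes a class centroid per reference class and a softmax over negated distances, so the signature is a short vector of per-class scores. All names and the centroid substitution are assumptions, not the paper's method.

```python
import numpy as np

def train_reference_classes(reference_images):
    """Stand-in for classifier training: one centroid per reference class.

    reference_images: dict class_name -> array of shape (n_images, n_features).
    A real system would train a discriminative multiclass classifier here.
    """
    return {name: feats.mean(axis=0) for name, feats in reference_images.items()}

def semantic_signature(feature, centroids):
    """Project a raw visual feature onto the reference classes.

    Returns a vector of softmax scores, one per reference class, in sorted
    class-name order: the 'semantic signature', far shorter than the raw
    visual feature vector.
    """
    names = sorted(centroids)
    dists = np.array([np.linalg.norm(feature - centroids[n]) for n in names])
    scores = np.exp(-dists)  # nearer class -> larger score
    return scores / scores.sum()
```

The signature sums to one and assigns the largest weight to the reference class nearest the image's visual features, which is the property the comparison in the online stage relies on.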
In the online stage the user issues a query keyword, and all matching images in the database, each with its own precomputed semantic signature, are retrieved first. When the user selects the image reflecting the intent, its semantic signature is compared with the semantic signatures of the other images, and the search results are re-ranked.
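The online comparison can be sketched as below. Cosine similarity is an assumed choice of comparison measure for the score-vector signatures, and the function names are hypothetical; the point is only that the comparison operates on the short signatures rather than the raw visual features.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank_by_signature(intent_sig, signatures):
    """Order images by similarity of their semantic signatures to the
    signature of the user's intent image, most similar first.

    intent_sig: 1-D semantic signature of the clicked image.
    signatures: dict image id -> 1-D semantic signature (same length).
    """
    return sorted(signatures,
                  key=lambda i: cosine_similarity(signatures[i], intent_sig),
                  reverse=True)
```

Because the signatures are short, this comparison is cheap even over a large result set, which is the efficiency benefit described next.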
The semantic signature projects the original visual features, with thousands of dimensions, into about twenty-five dimensions. This improves the results of web image re-ranking and has the advantage that the visual features of similar images can be correlated with the semantic meaning of the image the user selects.
5. CONCLUSION
The proposed framework improves web image re-ranking by using semantic signatures. At the online stage, the semantic signature obtained from the query keyword and the user's intent image in the semantic space is compared with the semantic signatures of other images to achieve efficient re-ranking. However, user experience is critical for web-based image search, and a study has yet to be conducted to fully reflect the extent of user satisfaction.