TY - GEN
T1 - Medical image annotation and retrieval using visual features
AU - Liu, Jing
AU - Hu, Yang
AU - Li, Mingjing
AU - Ma, Songde
AU - Ma, Wei-Ying
PY - 2007
Y1 - 2007
N2 - In this paper, we present the algorithms and results of our participation in the medical image annotation and retrieval tasks of ImageCLEFmed 2006. We explore the use of global and local features to describe medical images in the annotation task. Different kinds of global features are examined, and the most descriptive ones are extracted to represent the radiographs, effectively capturing the intensity, texture and shape characteristics of the image content. We also evaluate the descriptive power of local features, i.e. local image patches, for medical images. A newly developed spatial pyramid matching algorithm is applied to measure the similarity between images represented by sets of local features. Both descriptors use a multi-class SVM to classify the images. We achieve an error rate of 17.6% for the global descriptor and 18.2% for the local one, ranking sixth and ninth respectively among all submissions. For the medical image retrieval task, we use only visual features to describe the images; no textual information is considered. Different features are used to describe gray and color images. Our submission achieves a mean average precision (MAP) of 0.0681, ranking second among the 11 runs that also use only visual features. © Springer-Verlag Berlin Heidelberg 2007.
UR - http://www.scopus.com/inward/record.url?scp=38049126120&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-38049126120&origin=recordpage
U2 - 10.1007/978-3-540-74999-8_83
DO - 10.1007/978-3-540-74999-8_83
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 9783540749981
VL - 4730 LNCS
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 678
EP - 685
BT - Evaluation of Multilingual and Multi-modal Information Retrieval - 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Revised Selected Papers
PB - Springer Verlag
T2 - 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006
Y2 - 20 September 2006 through 22 September 2006
ER -