Abstract
In this article, we present the algorithms and results of our participation in the medical image annotation and retrieval tasks of ImageCLEFmed 2006. We exploit both global and local features to describe medical images in the annotation task. We examine different kinds of global features and extract the most descriptive ones, which effectively capture the intensity, texture and shape characteristics of the image content, to represent the radiographs. We also evaluate the descriptive power of local features, i.e., local image patches, for medical images. A newly developed spatial pyramid matching algorithm is applied to measure the similarity between images represented by sets of local features. Both representations are classified with a multi-class SVM. The error rate is 17.6% for the global description and 18.2% for the local one, ranking sixth and ninth respectively among all submissions. For the medical image retrieval task, we use only visual features to describe the images; no textual information is considered. Different features are used to describe gray-scale and color images. Our submission achieves a mean average precision (MAP) of 0.0681, which ranks second among the 11 runs that likewise use only visual features.
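The spatial pyramid matching step mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes local patches have already been quantized into codewords and binned into per-level grid histograms, and uses the level weighting from the standard spatial pyramid matching formulation (weight 1/2^L for level 0, 1/2^(L-l+1) for level l ≥ 1), with histogram intersection as the base similarity:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two histograms: sum of element-wise minima."""
    return float(np.minimum(h1, h2).sum())

def pyramid_match(hists_a, hists_b):
    """Weighted sum of histogram intersections over pyramid levels.

    hists_a, hists_b: lists indexed by level l; entry l is a flat
    histogram concatenating the codeword counts of the 4**l grid
    cells at that level (a hypothetical pre-binned representation).
    """
    L = len(hists_a) - 1  # finest pyramid level
    score = 0.0
    for l, (ha, hb) in enumerate(zip(hists_a, hists_b)):
        # Standard spatial pyramid weights: coarse levels count less.
        w = 1.0 / 2**L if l == 0 else 1.0 / 2**(L - l + 1)
        score += w * histogram_intersection(ha, hb)
    return score

# Two-level example (L=1): identical images give the self-similarity.
a = [np.array([2, 1]), np.array([1, 1, 1, 0])]
print(pyramid_match(a, a))  # → 3.0
```

A kernel built this way can be passed to a multi-class SVM (e.g. via a precomputed Gram matrix), which is one common way to combine pyramid matching with SVM classification.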
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 1172 |
| DOIs | |
| Publication status | Published - 2006 |
| Externally published | Yes |
| Event | 2006 Cross Language Evaluation Forum Workshop, CLEF 2006, co-located with the 10th European Conference on Digital Libraries, ECDL 2006 - Alicante, Spain Duration: 20 Sept 2006 → 22 Sept 2006 |
Bibliographical note
Publication details (e.g. title, author(s), publication statuses and dates) are captured on an “AS IS” and “AS AVAILABLE” basis at the time of record harvesting from the data source. Suggestions for further amendments or supplementary information can be sent to [email protected].

Research Keywords
- Image annotation
- Image retrieval
- Similarity measure
- Support vector machine