Deep Semantic-Visual Alignment for zero-shot remote sensing image scene classification

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

7 Scopus Citations


Original language: English
Pages (from-to): 140-152
Journal / Publication: ISPRS Journal of Photogrammetry and Remote Sensing
Online published: 14 Mar 2023
Publication status: Published - Apr 2023


Deep neural networks have achieved promising progress in remote sensing (RS) image classification, for which the training process requires abundant samples for each class. However, it is time-consuming and unrealistic to annotate labels for every RS category, given that the RS target database grows dynamically. Zero-shot learning (ZSL) allows for identifying novel classes that are not seen during training, which provides a promising solution to this problem. However, previous ZSL models mainly depend on manually labeled attributes or word embeddings extracted from language models to transfer knowledge from seen classes to novel classes. Such class embeddings may not be visually detectable, and the annotation process is time-consuming and labor-intensive. Besides, pioneering ZSL models use convolutional neural networks pre-trained on ImageNet, which focus on the main objects appearing in each image while neglecting the background context that also matters in RS scene classification. To address the above problems, we propose to collect visually detectable attributes automatically. We predict attributes for each class by depicting the semantic-visual similarity between attributes and images; in this way, the attribute annotation process is accomplished by machines instead of humans as in other methods. Moreover, we propose a Deep Semantic-Visual Alignment (DSVA) model that takes advantage of the self-attention mechanism in the transformer to associate local image regions together, integrating background context information into the prediction. The DSVA model further utilizes attribute attention maps to focus on the informative image regions that are essential for knowledge transfer in ZSL, and maps visual images into the attribute space to perform ZSL classification. With extensive experiments, we show that our model outperforms other state-of-the-art models by a large margin on a challenging large-scale RS scene classification benchmark.
Moreover, we qualitatively verify that the attributes annotated by our network are both class-discriminative and semantically related, which benefits zero-shot knowledge transfer. © 2023 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V.
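The core mechanism the abstract describes can be illustrated with a minimal sketch: images are mapped into an attribute space and assigned to the unseen class whose attribute signature they most resemble. Everything below is a toy illustration under assumed names, shapes, and random values; it is not the paper's DSVA implementation, which additionally uses transformer self-attention and attribute attention maps.

```python
import numpy as np

# Toy sketch of attribute-space zero-shot classification. All class
# names, attribute counts, and vectors are illustrative assumptions.
rng = np.random.default_rng(0)

num_attributes = 8  # e.g. "water", "vegetation", "buildings", ...
unseen_classes = ["beach", "forest", "residential"]

# Per-class attribute signatures (one row per unseen class), standing in
# for attributes annotated automatically via semantic-visual similarity.
class_attributes = rng.random((len(unseen_classes), num_attributes))
class_attributes /= np.linalg.norm(class_attributes, axis=1, keepdims=True)

def predict(image_attribute_vector: np.ndarray) -> str:
    """Return the unseen class whose attribute signature has the highest
    cosine similarity to the image's predicted attribute vector."""
    v = image_attribute_vector / np.linalg.norm(image_attribute_vector)
    scores = class_attributes @ v  # cosine similarities (rows are unit-norm)
    return unseen_classes[int(np.argmax(scores))]

# A test image whose backbone output has been projected into attribute
# space; faked here as a slightly noisy copy of the "forest" signature.
image_vec = class_attributes[1] + 0.05 * rng.random(num_attributes)
print(predict(image_vec))
```

The design point is that no image of the unseen classes is needed at training time: only their attribute signatures are required, which is what makes the automatic attribute annotation step central to the method.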

Research Area(s)

  • Automatic attribute annotation, Deep semantic-visual alignment model, Remote sensing scene classification, Zero-shot learning