(Un)likelihood Training for Interpretable Embedding
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Journal / Publication | ACM Transactions on Information Systems |
Volume | 42 |
Issue number | 3 |
Online published | Dec 2023 |
Publication status | Published - May 2024 |
Link(s)
DOI | DOI |
---|---|
Attachment(s) | Publisher's Copyright Statement |
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85184804381&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(914ff0e8-8cb5-4207-accf-a5a6cf78d63d).html |
Abstract
Cross-modal representation learning has become a new normal for bridging the semantic gap between text and visual data. Learning modality-agnostic representations in a continuous latent space, however, is often treated as a black-box, data-driven training process. It is well known that the effectiveness of representation learning depends heavily on the quality and scale of training data. For video representation learning, having a complete set of labels that annotates the full spectrum of video content for training is highly difficult, if not impossible. These two issues, black-box training and dataset bias, make representation learning practically challenging to deploy for video understanding, owing to unexplainable and unpredictable results. In this paper, we propose two novel training objectives, likelihood and unlikelihood functions, to unroll the semantics behind embeddings while addressing the label sparsity problem in training. The likelihood training aims to interpret the semantics of embeddings beyond the training labels, while the unlikelihood training leverages prior knowledge for regularization to ensure a semantically coherent interpretation. With both training objectives, a new encoder-decoder network, which learns interpretable cross-modal representations, is proposed for ad-hoc video search. Extensive experiments on the TRECVid and MSR-VTT datasets show that the proposed network outperforms several state-of-the-art retrieval models with a statistically significant performance margin. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
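To make the two training objectives concrete, below is a minimal sketch of how a likelihood term (push probability mass toward annotated concepts) and an unlikelihood term (push probability mass away from concepts that prior knowledge rules out) could be combined over a decoder's concept vocabulary. The variable names, tensor shapes, sigmoid parameterization, and the way positive/negative concept sets are constructed are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of likelihood / unlikelihood losses over a concept vocabulary.
# NOTE: names, shapes, and the choice of negative concepts are hypothetical,
# used only to illustrate the two objectives described in the abstract.
import torch
import torch.nn.functional as F


def likelihood_loss(logits, positive_mask):
    """Encourage high probability on concepts annotated for the video.

    logits:        (batch, vocab) raw decoder scores over concept vocabulary
    positive_mask: (batch, vocab) 1.0 where a concept is labelled, else 0.0
    """
    log_p = F.logsigmoid(logits)                    # log p(concept | embedding)
    loss = -(positive_mask * log_p).sum(dim=1)      # -sum of log p over positives
    return loss.mean()


def unlikelihood_loss(logits, negative_mask, eps=1e-6):
    """Penalise probability mass on concepts prior knowledge deems irrelevant.

    negative_mask: (batch, vocab) 1.0 where a concept should be suppressed
    """
    p = torch.sigmoid(logits)
    log_one_minus_p = torch.log(torch.clamp(1.0 - p, min=eps))
    loss = -(negative_mask * log_one_minus_p).sum(dim=1)
    return loss.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    batch, vocab = 4, 10
    logits = torch.randn(batch, vocab, requires_grad=True)
    pos = torch.zeros(batch, vocab); pos[:, :2] = 1.0    # hypothetical labelled concepts
    neg = torch.zeros(batch, vocab); neg[:, -3:] = 1.0   # hypothetical excluded concepts
    total = likelihood_loss(logits, pos) + unlikelihood_loss(logits, neg)
    total.backward()
    print(float(total))
```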
Research Area(s)
- cross-modal representation learning, ad-hoc video search, explainable embedding
Citation Format(s)
(Un)likelihood Training for Interpretable Embedding. / WU, Jiaxin; NGO, Chong-Wah; CHAN, Wing-Kwong et al.
In: ACM Transactions on Information Systems, Vol. 42, No. 3, 05.2024.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review