VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix

Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng*, Ran Cheng, Chengguo Yin, Ping Luo

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

34 Citations (Scopus)

Abstract

Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor or crawled from the internet and then subjected to elaborate data cleaning. To reduce the dependency on well-aligned image-text pairs, it is promising to directly leverage large-scale text-only and image-only corpora. This paper proposes a data augmentation method, namely cross-modal CutMix (CMC), for implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC transforms natural sentences from the textual view into a multi-modal view, where visually-grounded words in a sentence are randomly replaced by diverse image patches with similar semantics. The proposed CMC has several appealing properties. First, it enhances data diversity while keeping the semantic meaning intact, which helps when aligned data are scarce. Second, by attaching cross-modal noise to uni-modal data, it guides models to learn token-level interactions across modalities for better denoising. Furthermore, we present a new unpaired VLP method, dubbed VLMixer, that integrates CMC with contrastive learning to pull together the uni-modal and multi-modal views for better instance-level alignment across modalities. Extensive experiments on five downstream tasks show that VLMixer surpasses previous state-of-the-art unpaired VLP methods. Project page: https://github.com/ttengwang/VLMixer. Copyright © 2022 by the author(s)
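The core augmentation described in the abstract (replacing visually-grounded words with semantically similar image patches) can be illustrated with a minimal sketch. The function name, the `patch_bank` lookup structure, and the `replace_prob` parameter below are illustrative assumptions, not the paper's actual implementation; in VLMixer the replacements would be patch embeddings drawn from an image corpus rather than the string placeholders used here.

```python
import random

def cross_modal_cutmix(tokens, patch_bank, replace_prob=0.3, rng=None):
    """Sketch of cross-modal CutMix (CMC): visually-grounded words in a
    sentence are randomly swapped for image patches with similar semantics.

    `patch_bank` maps a grounded word to candidate patch identifiers
    (stand-ins for patch embeddings); unmapped words are left as text.
    """
    rng = rng or random.Random()
    mixed = []
    for tok in tokens:
        candidates = patch_bank.get(tok)
        if candidates and rng.random() < replace_prob:
            # Replace the textual token with a randomly drawn image patch,
            # producing the multi-modal view of the sentence.
            mixed.append(("patch", rng.choice(candidates)))
        else:
            mixed.append(("text", tok))
    return mixed

# Example: only the visually-grounded words ("dog", "grass") are eligible
# for replacement; function words like "a" and "on" always stay textual.
bank = {"dog": ["dog_patch_1", "dog_patch_2"], "grass": ["grass_patch_1"]}
view = cross_modal_cutmix(["a", "dog", "runs", "on", "grass"], bank,
                          replace_prob=1.0, rng=random.Random(0))
```

The resulting mixed sequence would then be fed to the transformer alongside the original textual view, with contrastive learning pulling the two views together at the instance level.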
Original language: English
Title of host publication: Proceedings of the 39th International Conference on Machine Learning
Publisher: ML Research Press
Pages: 22680-22690
Publication status: Published - Jul 2022
Externally published: Yes
Event: 39th International Conference on Machine Learning (ICML 2022) - Hybrid, Baltimore, United States
Duration: 17 Jul 2022 - 23 Jul 2022
https://icml.cc/virtual/2022/index.html
https://icml.cc/Conferences/2022
https://proceedings.mlr.press/v162/

Publication series

Name: Proceedings of Machine Learning Research
Volume: 162
ISSN (Print): 2640-3498

Conference

Conference: 39th International Conference on Machine Learning (ICML 2022)
Country/Territory: United States
City: Baltimore
Period: 17/07/22 - 23/07/22
