Exposing fake images generated by text-to-image diffusion models

Qiang Xu*, Hao Wang, Laijin Meng, Zhongjie Mi, Jianye Yuan, Hong Yan

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

10 Citations (Scopus)

Abstract

Text-to-image diffusion models (DM) have posed unprecedented challenges to the authenticity and integrity of digital images, making the detection of computer-generated images one of the most important image forensics techniques. However, the detection of images generated by text-to-image diffusion models has rarely been reported in the literature. To tackle this issue, we first analyze the acquisition process of DM images. We then construct a hybrid neural network based on an attention-guided feature extraction (AGFE) module and a vision transformers (ViTs)-based feature extraction (ViTFE) module. An attention mechanism is adopted in the AGFE module to capture long-range feature interactions and boost representation capability. The ViTFE module, containing sequential MobileNetV2 (MNV2) and MobileViT blocks, is designed to learn global representations. Extensive experiments on different types of generated images demonstrate the effectiveness and robustness of our method in exposing fake images generated by text-to-image diffusion models. © 2023 Elsevier B.V.
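The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of the kind of hybrid network it names: an attention-guided feature block (AGFE), a MobileNetV2 inverted-residual block (MNV2), and a MobileViT-style stage combining local convolution with a transformer encoder (ViTFE), feeding a binary real/fake head. All class names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AGFEBlock(nn.Module):
    """Attention-guided feature extraction (hypothetical): convolutional
    features reweighted by self-attention over spatial positions to
    capture long-range feature interactions."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        x = self.conv(x)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)        # residual + norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class MNV2Block(nn.Module):
    """Standard MobileNetV2 inverted-residual block: expand, depthwise
    3x3 conv, project back, with a skip connection."""
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)

class ViTFEBlock(nn.Module):
    """MobileViT-style stage (simplified): local features from an MNV2
    block, then a transformer encoder over per-pixel tokens to learn
    global representations."""
    def __init__(self, channels: int, depth: int = 2, heads: int = 4):
        super().__init__()
        self.local = MNV2Block(channels)
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, dim_feedforward=2 * channels,
            batch_first=True)
        self.global_mixer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.local(x)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        tokens = self.global_mixer(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class HybridDetector(nn.Module):
    """Hybrid AGFE + ViTFE network with a binary real/fake head."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.stem = nn.Sequential(                   # downsample 4x
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1))
        self.agfe = AGFEBlock(channels)
        self.vitfe = ViTFEBlock(channels)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 2))                  # logits: real vs. DM-generated

    def forward(self, x):
        return self.head(self.vitfe(self.agfe(self.stem(x))))

if __name__ == "__main__":
    model = HybridDetector()
    logits = model(torch.randn(2, 3, 128, 128))      # dummy RGB batch
    print(logits.shape)                              # torch.Size([2, 2])
```

The ordering here (attention-guided local features first, global transformer mixing second) follows the abstract's description of the two modules; the actual paper may stack, fuse, or parameterize them differently.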
Original language: English
Pages (from-to): 76-82
Journal: Pattern Recognition Letters
Volume: 176
Online published: 28 Oct 2023
DOIs
Publication status: Published - Dec 2023

Research Keywords

  • Attention mechanism
  • Diffusion models (DM)
  • Image forensics
  • Text-to-image
  • Vision transformers (ViTs)
