TY - GEN
T1 - Beyond Generation
T2 - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2025)
AU - Zhong, Nan
AU - Chen, Haoyu
AU - Xu, Yiran
AU - Qian, Zhenxing
AU - Zhang, Xinpeng
PY - 2025
Y1 - 2025
N2 - The prevalence of AI-generated images has evoked concerns regarding the potential misuse of image generation technologies. In response, numerous detection methods aim to identify AI-generated images by analyzing generative artifacts. Unfortunately, most detectors quickly become obsolete with the development of generative models. In this paper, we first design a low-level feature extractor that transforms spatial images into a feature space where images from different sources exhibit distinct distributions. The pretext task for the feature extractor is to distinguish between images that differ only at the pixel level. This image set comprises the original image as well as versions that have been subjected to varying levels of noise and subsequently denoised using a pre-trained diffusion model. We employ the diffusion model as a denoising tool rather than an image generation tool. Then, we frame AI-generated image detection as a one-class classification problem. We estimate the low-level intrinsic feature distribution of real photographic images and identify features that deviate from this distribution as indicators of AI-generated images. We evaluate our method against over 20 different generative models, including those in the GenImage and DRCT-2M datasets. Extensive experiments demonstrate its effectiveness on AI-generated images produced not only by diffusion models but also by GANs, flow-based models, and their variants. © 2025 IEEE
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:001601106700192
DO - 10.1109/CVPR52734.2025.00773
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 979-8-3315-4365-5
SP - 8258
EP - 8268
BT - Proceedings - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition
PB - IEEE
Y2 - 11 June 2025 through 15 June 2025
ER -