FreeDiff: Progressive Frequency Truncation for Image Editing with Diffusion Models

Research output: Chapters, Conference Papers, Creative and Literary Works; RGC 32 - Refereed conference paper (with host publication); peer-review

Author(s)

  • Wei Wu
  • Qingnan Fan
  • Shuai Qin
  • Hong Gu
  • Ruoyu Zhao

Detail(s)

Original language: English
Title of host publication: Computer Vision – ECCV 2024 - 18th European Conference, Proceedings
Editors: Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol
Publisher: Springer, Cham
Pages: 194-209
Volume: Part V
ISBN (electronic): 9783031726521
ISBN (print): 9783031726514
Publication status: Published - 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15063
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Conference

Title: 18th European Conference on Computer Vision (ECCV 2024)
Location: MiCo Milano
Place: Italy
City: Milan
Period: 29 September - 4 October 2024

Abstract

Precise image editing with text-to-image models has attracted increasing interest due to their remarkable generative capabilities and user-friendly nature. However, such attempts face the pivotal challenge of misalignment between the intended precise editing target regions and the broader area impacted by the guidance in practice. Although methods leveraging attention mechanisms have been developed to refine the editing guidance, these approaches require modifications to complex network architectures and are limited to specific editing tasks. In this work, we re-examine the diffusion process and the misalignment problem from a frequency perspective, revealing that, due to the power law of natural images and the decaying noise schedule, the denoising network primarily recovers low-frequency image components during the earlier timesteps and thus introduces excessive low-frequency signals into the editing guidance. Leveraging this insight, we introduce a novel fine-tuning-free approach that employs progressive Frequency truncation to refine the guidance of Diffusion models for universal editing tasks (FreeDiff). Our method achieves results comparable to state-of-the-art methods across a variety of editing tasks and on a diverse set of images, highlighting its potential as a versatile tool in image editing applications. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
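
A minimal sketch of the frequency-truncation idea described above, assuming a standard classifier-free-guidance sampling loop in PyTorch: the guidance term (the difference between the conditional and unconditional noise predictions) is high-pass filtered, with a cutoff that shrinks as denoising progresses. The function names and the linear cutoff schedule are illustrative assumptions, not the procedure from the paper.

import torch
import torch.fft as fft

def truncate_low_frequencies(guidance: torch.Tensor, cutoff: float) -> torch.Tensor:
    # guidance: (B, C, H, W) tensor, e.g. eps_cond - eps_uncond.
    # cutoff: fraction of the half-spectrum radius to zero out (0 disables truncation).
    if cutoff <= 0:
        return guidance
    _, _, H, W = guidance.shape
    spectrum = fft.fftshift(fft.fft2(guidance), dim=(-2, -1))

    # Centered radial mask over the 2D spectrum; frequencies inside the radius are removed.
    yy = (torch.arange(H, device=guidance.device) - H // 2).view(-1, 1).float()
    xx = (torch.arange(W, device=guidance.device) - W // 2).view(1, -1).float()
    radius = torch.sqrt(yy ** 2 + xx ** 2)
    low_freq = radius <= cutoff * min(H, W) / 2
    spectrum = spectrum.masked_fill(low_freq, 0)

    return fft.ifft2(fft.ifftshift(spectrum, dim=(-2, -1))).real

def guided_noise(eps_uncond, eps_cond, t, num_steps, scale=7.5):
    # Classifier-free guidance with a timestep-dependent frequency truncation:
    # early (noisy) steps recover mostly low-frequency content, so more of the
    # low band is truncated there; the cutoff shrinks as t approaches 0.
    # The linear schedule below is a hypothetical choice for illustration.
    cutoff = 0.25 * (t / num_steps)
    guidance = truncate_low_frequencies(eps_cond - eps_uncond, cutoff)
    return eps_uncond + scale * guidance

In a latent diffusion setup, the same truncation would be applied to the guidance computed on latents rather than on pixels; only the tensor shapes change.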

Research Area(s)

  • Diffusion Models, Frequency Truncation, Image Editing

Citation Format(s)

FreeDiff: Progressive Frequency Truncation for Image Editing with Diffusion Models. / Wu, Wei; Fan, Qingnan; Qin, Shuai et al.
Computer Vision – ECCV 2024 - 18th European Conference, Proceedings. ed. / Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol. Vol. Part V. Springer, Cham, 2025. p. 194-209 (Lecture Notes in Computer Science; Vol. 15063).
