2025 "denoising process" Papers

19 papers found

Accelerating Diffusion Sampling via Exploiting Local Transition Coherence

Shangwen Zhu, Han Zhang, Zhantao Yang et al.

ICCV 2025 (poster) · arXiv:2503.09675

BADiff: Bandwidth Adaptive Diffusion Model

Xi Zhang, Hanwei Zhu, Yan Zhong et al.

NeurIPS 2025 (poster) · arXiv:2510.21366

Communication-Efficient Diffusion Denoising Parallelization via Reuse-then-Predict Mechanism

Kunyun Wang, Bohan Li, Kai Yu et al.

NeurIPS 2025 (poster) · arXiv:2505.14741
1 citation

Diffusion Models are Evolutionary Algorithms

Yanbo Zhang, Benedikt Hartl, Hananel Hazan et al.

ICLR 2025 (poster) · arXiv:2410.02543
15 citations

dKV-Cache: The Cache for Diffusion Language Models

Xinyin Ma, Runpeng Yu, Gongfan Fang et al.

NeurIPS 2025 (poster) · arXiv:2505.15781
66 citations

DynaGuide: Steering Diffusion Policies with Active Dynamic Guidance

Maximilian Du, Shuran Song

NeurIPS 2025 (poster) · arXiv:2506.13922
5 citations

FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality Samples with Less Compute

Sotiris Anagnostidis, Gregor Bachmann, Yeongmin Kim et al.

CVPR 2025 (highlight) · arXiv:2502.20126
5 citations

FreeMorph: Tuning-Free Generalized Image Morphing with Diffusion Model

Yukang Cao, Chenyang Si, Jinghao Wang et al.

ICCV 2025 (poster) · arXiv:2507.01953
5 citations

Is Your Diffusion Model Actually Denoising?

Daniel Pfrommer, Zehao Dou, Christopher Scarvelis et al.

NeurIPS 2025 (poster)

Make It Count: Text-to-Image Generation with an Accurate Number of Objects

Lital Binyamin, Yoad Tewel, Hilit Segev et al.

CVPR 2025 (poster) · arXiv:2406.10210
32 citations

MRO: Enhancing Reasoning in Diffusion Language Models via Multi-Reward Optimization

Chenglong Wang, Yang Gan, Hang Zhou et al.

NeurIPS 2025 (poster) · arXiv:2510.21473

Not All Parameters Matter: Masking Diffusion Models for Enhancing Generation Ability

Lei Wang, Senmao Li, Fei Yang et al.

CVPR 2025 (poster) · arXiv:2505.03097
2 citations

Omegance: A Single Parameter for Various Granularities in Diffusion-Based Synthesis

Xinyu Hou, Zongsheng Yue, Xiaoming Li et al.

ICCV 2025 (poster) · arXiv:2411.17769

OmniCache: A Trajectory-Oriented Global Perspective on Training-Free Cache Reuse for Diffusion Transformer Models

Huanpeng Chu, Wei Wu, Guanyu Feng et al.

ICCV 2025 (poster) · arXiv:2508.16212
6 citations

On Efficiency-Effectiveness Trade-off of Diffusion-based Recommenders

Wenyu Mao, Jiancan Wu, Guoqing Hu et al.

NeurIPS 2025 (oral) · arXiv:2510.17245

Pioneering 4-Bit FP Quantization for Diffusion Models: Mixup-Sign Quantization and Timestep-Aware Fine-Tuning

Maosen Zhao, Pengtao Chen, Chong Yu et al.

CVPR 2025 (poster) · arXiv:2505.21591
3 citations

Speculative Jacobi-Denoising Decoding for Accelerating Autoregressive Text-to-Image Generation

Yao Teng, Fu-Yun Wang, Xian Liu et al.

NeurIPS 2025 (poster) · arXiv:2510.08994

Two-Steps Diffusion Policy for Robotic Manipulation via Genetic Denoising

Mateo Clémente, Leo Brunswic, Yang et al.

NeurIPS 2025 (poster) · arXiv:2510.21991
1 citation

VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide

Dohun Lee, Bryan Sangwoo Kim, Geon Yeong Park et al.

CVPR 2025 (poster) · arXiv:2410.04364
2 citations