"image generation" Papers

52 papers found • Page 1 of 2

Dynamic Diffusion Transformer

Wangbo Zhao, Yizeng Han, Jiasheng Tang et al.

ICLR 2025 poster • arXiv:2410.03456
34 citations

Easing Training Process of Rectified Flow Models Via Lengthening Inter-Path Distance

Shifeng Xu, Yanzhu Liu, Adams Kong

ICLR 2025 poster
2 citations

End-to-End Multi-Modal Diffusion Mamba

Chunhao Lu, Qiang Lu, Meichen Dong et al.

ICCV 2025 poster • arXiv:2510.13253
3 citations

Entropic Time Schedulers for Generative Diffusion Models

Dejan Stancevic, Florian Handke, Luca Ambrogioni

NeurIPS 2025 poster • arXiv:2504.13612
3 citations

Faster Inference of Flow-Based Generative Models via Improved Data-Noise Coupling

Aram Davtyan, Leello Dadi, Volkan Cevher et al.

ICLR 2025 poster
5 citations

FUDOKI: Discrete Flow-based Unified Understanding and Generation via Kinetic-Optimal Velocities

Jin Wang, Yao Lai, Aoxue Li et al.

NeurIPS 2025 spotlight • arXiv:2505.20147
20 citations

GMValuator: Similarity-based Data Valuation for Generative Models

Jiaxi Yang, Wenlong Deng, Benlin Liu et al.

ICLR 2025 poster • arXiv:2304.10701
2 citations

Inference-Time Scaling for Flow Models via Stochastic Generation and Rollover Budget Forcing

Jaihoon Kim, Taehoon Yoon, Jisung Hwang et al.

NeurIPS 2025 poster • arXiv:2503.19385
20 citations

Informed Correctors for Discrete Diffusion Models

Yixiu Zhao, Jiaxin Shi, Feng Chen et al.

NeurIPS 2025 poster • arXiv:2407.21243
31 citations

Learning Diffusion Models with Flexible Representation Guidance

Chenyu Wang, Cai Zhou, Sharut Gupta et al.

NeurIPS 2025 poster • arXiv:2507.08980
5 citations

MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization

Siyuan Li, Luyuan Zhang, Zedong Wang et al.

CVPR 2025 poster • arXiv:2504.00999
6 citations

MET3R: Measuring Multi-View Consistency in Generated Images

Mohammad Asim, Christopher Wewer, Thomas Wimmer et al.

CVPR 2025 poster • arXiv:2501.06336
43 citations

Nested Diffusion Models Using Hierarchical Latent Priors

Xiao Zhang, Ruoxi Jiang, Rebecca Willett et al.

CVPR 2025 poster • arXiv:2412.05984
1 citation

Not All Parameters Matter: Masking Diffusion Models for Enhancing Generation Ability

Lei Wang, Senmao Li, Fei Yang et al.

CVPR 2025 poster • arXiv:2505.03097
2 citations

Parallel Sequence Modeling via Generalized Spatial Propagation Network

Hongjun Wang, Wonmin Byeon, Jiarui Xu et al.

CVPR 2025 poster • arXiv:2501.12381
3 citations

PFDiff: Training-Free Acceleration of Diffusion Models Combining Past and Future Scores

Guangyi Wang, Yuren Cai, Lijiang Li et al.

ICLR 2025 poster • arXiv:2408.08822
5 citations

REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers

Xingjian Leng, Jaskirat Singh, Yunzhong Hou et al.

ICCV 2025 poster • arXiv:2504.10483
73 citations

Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think

Ge Wu, Shen Zhang, Ruijing Shi et al.

NeurIPS 2025 oral • arXiv:2507.01467
27 citations

SCoT: Unifying Consistency Models and Rectified Flows via Straight-Consistent Trajectories

Zhangkai Wu, Xuhui Fan, Hongyu Wu et al.

NeurIPS 2025 poster • arXiv:2502.16972
1 citation

Simple ReFlow: Improved Techniques for Fast Flow Models

Beomsu Kim, Yu-Guan Hsieh, Michal Klein et al.

ICLR 2025 poster • arXiv:2410.07815
28 citations

TADA: Improved Diffusion Sampling with Training-free Augmented DynAmics

Tianrong Chen, Huangjie Zheng, David Berthelot et al.

NeurIPS 2025 poster • arXiv:2506.21757
1 citation

VETA-DiT: Variance-Equalized and Temporally Adaptive Quantization for Efficient 4-bit Diffusion Transformers

Qinkai Xu, Yijin Liu, Yang Chen et al.

NeurIPS 2025 oral

Z-Magic: Zero-shot Multiple Attributes Guided Image Creator

Yingying Deng, Xiangyu He, Fan Tang et al.

CVPR 2025 poster • arXiv:2503.12124

A Bias-Variance-Covariance Decomposition of Kernel Scores for Generative Models

Sebastian Gregor Gruber, Florian Buettner

ICML 2024 poster

Accelerating Parallel Sampling of Diffusion Models

Zhiwei Tang, Jiasheng Tang, Hao Luo et al.

ICML 2024 poster

ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank

Zhanjie Zhang, Quanwei Zhang, Wei Xing et al.

AAAI 2024 paper • arXiv:2312.06135

ArtWhisperer: A Dataset for Characterizing Human-AI Interactions in Artistic Creations

Kailas Vodrahalli, James Zou

ICML 2024 poster

Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale

Candi Zheng, Yuan Lan

ICML 2024 poster

Completing Visual Objects via Bridging Generation and Segmentation

Xiang Li, Yinpeng Chen, Chung-Ching Lin et al.

ICML 2024 poster

Critical windows: non-asymptotic theory for feature emergence in diffusion models

Marvin Li, Sitan Chen

ICML 2024 poster

Directly Denoising Diffusion Models

Dan Zhang, Jingjing Wang, Feng Luo

ICML 2024 poster

Feedback Efficient Online Fine-Tuning of Diffusion Models

Masatoshi Uehara, Yulai Zhao, Kevin Black et al.

ICML 2024 poster

Going beyond Compositions, DDPMs Can Produce Zero-Shot Interpolations

Justin Deschenaux, Igor Krawczuk, Grigorios Chrysos et al.

ICML 2024 poster

Image Content Generation with Causal Reasoning

Xiaochuan Li, Baoyu Fan, Run Zhang et al.

AAAI 2024 paper • arXiv:2312.07132

Kepler codebook

Junrong Lian, Ziyue Dong, Pengxu Wei et al.

ICML 2024 poster

LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging

Jinuk Kim, Marwa El Halabi, Mingi Ji et al.

ICML 2024 poster

MS$^3$D: A RG Flow-Based Regularization for GAN Training with Limited Data

Jian Wang, Xin Lan, Yuxin Tian et al.

ICML 2024 poster

Multi-Architecture Multi-Expert Diffusion Models

Yunsung Lee, Jin-Young Kim, Hyojun Go et al.

AAAI 2024 paper • arXiv:2306.04990
39 citations

Neural Diffusion Models

Grigory Bartosh, Dmitry Vetrov, Christian Andersson Naesseth

ICML 2024 poster

On Inference Stability for Diffusion Models

Viet Nguyen, Giang Vu, Tung Nguyen Thanh et al.

AAAI 2024 paper • arXiv:2312.12431
2 citations

On the Trajectory Regularity of ODE-based Diffusion Sampling

Defang Chen, Zhenyu Zhou, Can Wang et al.

ICML 2024 poster

Operator-Learning-Inspired Modeling of Neural Ordinary Differential Equations

Woojin Cho, Seunghyeon Cho, Hyundong Jin et al.

AAAI 2024 paper • arXiv:2312.10274

Quantum Implicit Neural Representations

Jiaming Zhao, Wenbo Qiao, Peng Zhang et al.

ICML 2024 poster

Reducing Spatial Fitting Error in Distillation of Denoising Diffusion Models

Shengzhe Zhou, Zejian Li, Shengyuan Zhang et al.

AAAI 2024 paper • arXiv:2311.03830
1 citation

ReGround: Improving Textual and Spatial Grounding at No Cost

Phillip (Yuseung) Lee, Minhyuk Sung

ECCV 2024 poster • arXiv:2403.13589
4 citations

S2WAT: Image Style Transfer via Hierarchical Vision Transformer Using Strips Window Attention

Chiyu Zhang, Xiaogang Xu, Lei Wang et al.

AAAI 2024 paper • arXiv:2210.12381
46 citations

Scalable Wasserstein Gradient Flow for Generative Modeling through Unbalanced Optimal Transport

Jaemoo Choi, Jaewoong Choi, Myungjoo Kang

ICML 2024 poster

The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling

Jiajun Ma, Shuchen Xue, Tianyang Hu et al.

ICML 2024 poster

Trainable Highly-expressive Activation Functions

Irit Chelly, Shahaf Finder, Shira Ifergane et al.

ECCV 2024 poster • arXiv:2407.07564
8 citations

Turning Waste into Wealth: Leveraging Low-Quality Samples for Enhancing Continuous Conditional Generative Adversarial Networks

AAAI 2024 paper • arXiv:2308.10273