2025 "text generation" Papers
16 papers found
Beyond Autoregression: Fast LLMs via Self-Distillation Through Time
Justin Deschenaux, Caglar Gulcehre
ICLR 2025 (poster) · arXiv:2410.21035 · 25 citations
Chunk-Distilled Language Modeling
Yanhong Li, Karen Livescu, Jiawei Zhou
ICLR 2025 (poster) · arXiv:2501.00343 · 3 citations
Concept Bottleneck Large Language Models
Chung-En Sun, Tuomas Oikarinen, Berk Ustun et al.
ICLR 2025 (poster) · arXiv:2412.07992 · 22 citations
Copyright-Protected Language Generation via Adaptive Model Fusion
Javier Abad, Konstantin Donhauser, Francesco Pinto et al.
ICLR 2025 (poster) · arXiv:2412.06619 · 3 citations
d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning
Siyan Zhao, Devaansh Gupta, Qinqing Zheng et al.
NeurIPS 2025 (spotlight) · arXiv:2504.12216 · 75 citations
Fast Solvers for Discrete Diffusion Models: Theory and Applications of High-Order Algorithms
Yinuo Ren, Haoxuan Chen, Yuchen Zhu et al.
NeurIPS 2025 (poster) · arXiv:2502.00234 · 29 citations
HaDeMiF: Hallucination Detection and Mitigation in Large Language Models
Xiaoling Zhou, Mingjie Zhang, Zhemg Lee et al.
ICLR 2025 (poster) · 9 citations
Informed Correctors for Discrete Diffusion Models
Yixiu Zhao, Jiaxin Shi, Feng Chen et al.
NeurIPS 2025 (poster) · arXiv:2407.21243 · 31 citations
Iterative Foundation Model Fine-Tuning on Multiple Rewards
Pouya M. Ghari, Simone Sciabola, Ye Wang
NeurIPS 2025 (poster) · arXiv:2511.00220
Mixture of Inputs: Text Generation Beyond Discrete Token Sampling
Yufan Zhuang, Liyuan Liu, Chandan Singh et al.
NeurIPS 2025 (poster)
Next Semantic Scale Prediction via Hierarchical Diffusion Language Models
Cai Zhou, Chenyu Wang, Dinghuai Zhang et al.
NeurIPS 2025 (poster) · 3 citations
Optimal Control for Transformer Architectures: Enhancing Generalization, Robustness and Efficiency
Kelvin Kan, Xingjian Li, Benjamin Zhang et al.
NeurIPS 2025 (poster) · arXiv:2505.13499 · 3 citations
Scaling up Masked Diffusion Models on Text
Shen Nie, Fengqi Zhu, Chao Du et al.
ICLR 2025 (oral) · arXiv:2410.18514 · 110 citations
SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration
Heming Xia, Yongqi Li, Jun Zhang et al.
ICLR 2025 (poster) · arXiv:2410.06916 · 39 citations
Theoretical Benefit and Limitation of Diffusion Language Model
Guhao Feng, Yihan Geng, Jian Guan et al.
NeurIPS 2025 (poster) · arXiv:2502.09622 · 27 citations
Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs
Minh Nguyen, Andrew Baker, Clement Neo et al.
ICLR 2025 (poster) · arXiv:2407.01082 · 82 citations