Non-Markovian Discrete Diffusion with Causal Language Models

Citations: 1
Rank: #1172 in NeurIPS 2025, of 5858 papers
Authors: 10
Data Points: 2

Abstract

Discrete diffusion models offer a flexible, controllable approach to structured sequence generation, yet they still lag behind causal language models in expressive power. A key limitation lies in their reliance on the Markovian assumption, which restricts each step to condition only on the current state and can lead to uncorrectable error accumulation. In this paper, we introduce CaDDi, a discrete diffusion model that conditions on the entire generative trajectory, thereby lifting the Markov constraint and allowing the model to revisit and improve past states. By unifying sequential (causal) and temporal (diffusion) reasoning in a single non-Markovian transformer, CaDDi also treats standard causal language models as a special case and permits the direct reuse of pretrained LLM weights with no architectural changes. Empirically, CaDDi outperforms state-of-the-art discrete diffusion baselines on natural-language benchmarks, substantially narrowing the remaining gap to large autoregressive transformers.
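The abstract only sketches the mechanism, so below is a minimal, illustrative PyTorch sketch of the stated idea: flatten the noisy states x_T, ..., x_1 of a diffusion trajectory into a single token stream and feed it to an unmodified decoder-only transformer, so each prediction can attend to the whole trajectory rather than only the current state. Everything concrete here is an assumption made for illustration: the absorbing-mask corruption, the corrupt and trajectory_to_tokens helpers, and the SEP-style step separator are hypothetical choices, not details taken from the paper.

```python
# Minimal sketch, not the paper's implementation. Assumes absorbing-state
# ("mask") corruption and a hypothetical SEP token between trajectory steps.
import torch
import torch.nn as nn

VOCAB = 100        # toy vocabulary size (hypothetical)
MASK = VOCAB - 1   # absorbing mask token, common in discrete diffusion
SEP = VOCAB - 2    # hypothetical separator between trajectory steps
SEQ_LEN, T = 8, 4  # sequence length and number of diffusion steps


def corrupt(x0, t, total=T):
    """Mask each token independently with probability t / total."""
    keep = torch.rand(x0.shape) >= t / total
    return torch.where(keep, x0, torch.full_like(x0, MASK))


def trajectory_to_tokens(x0):
    """Flatten the trajectory x_T, ..., x_1 into one causal input.

    A Markovian model would see only the current state x_t; here the
    causal LM attends to every earlier (noisier) state as well, which
    is the non-Markovian conditioning the abstract describes.
    """
    steps = [corrupt(x0, t) for t in range(T, 0, -1)]  # noisiest first
    sep = torch.full((x0.size(0), 1), SEP, dtype=torch.long)
    pieces = [tok for s in steps for tok in (s, sep)]
    return torch.cat(pieces, dim=1)


class CausalLM(nn.Module):
    """Vanilla decoder-only transformer: no architectural changes needed."""

    def __init__(self, d=64, heads=4, layers=2):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, d)
        block = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        self.body = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(d, VOCAB)

    def forward(self, tokens):
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.body(self.emb(tokens), mask=causal)
        return self.head(h)


x0 = torch.randint(0, VOCAB - 2, (2, SEQ_LEN))  # clean toy batch
inp = trajectory_to_tokens(x0)                   # full trajectory as tokens
logits = CausalLM()(inp)                         # next-token predictions
print(inp.shape, logits.shape)                   # (2, 36) and (2, 36, 100)
```

Because the backbone is a plain causal LM over a flattened trajectory, pretrained LLM weights could in principle be loaded directly, which is the special-case relationship the abstract highlights.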

Citation History

Jan 26, 2026: 1
Jan 27, 2026: 1