Papers by Xunhao Lai
3 papers found
Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?
Xi Chen, Kaituo Feng, Changsheng Li et al.
NeurIPS 2025 (poster)
FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference
Xunhao Lai, Jianqiao Lu, Yao Luo et al.
ICLR 2025 (poster) · arXiv:2502.20766 · 51 citations
Model Merging in Pre-training of Large Language Models
Yunshui Li, Yiyuan Ma, Shen Yan et al.
NeurIPS 2025 (poster)