2025 Oral "large language models" Papers

25 papers found

AutoDiscovery: Open-ended Scientific Discovery via Bayesian Surprise

Dhruv Agarwal, Bodhisattwa Prasad Majumder, Reece Adamson et al.

NeurIPS 2025 · Oral · arXiv:2507.00310 · 3 citations

Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation

Jianyuan Guo, Peike Li, Trevor Cohn

NeurIPS 2025 · Oral · arXiv:2505.15438 · 3 citations

Concept Incongruence: An Exploration of Time and Death in Role Playing

Xiaoyan Bai, Ike Peng, Aditya Singh et al.

NeurIPS 2025 · Oral · arXiv:2505.14905 · 1 citation

DanmakuTPPBench: A Multi-modal Benchmark for Temporal Point Process Modeling and Understanding

Yue Jiang, Jichu Li, Yang Liu et al.

NeurIPS 2025 · Oral · arXiv:2505.18411 · 3 citations

Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?

Yang Yue, Zhiqi Chen, Rui Lu et al.

NeurIPS 2025 · Oral · arXiv:2504.13837 · 483 citations

Earlier Tokens Contribute More: Learning Direct Preference Optimization From Temporal Decay Perspective

Ruichen Shao, Bei Li, Gangao Liu et al.

ICLR 2025 · Oral · arXiv:2502.14340 · 7 citations

Episodic Memories Generation and Evaluation Benchmark for Large Language Models

Alexis Huet, Zied Houidi, Dario Rossi

ICLR 2025 · Oral · arXiv:2501.13121 · 8 citations

Generating Computational Cognitive Models using Large Language Models

Milena Rmus, Akshay Kumar Jagadish, Marvin Mathony et al.

NeurIPS 2025 · Oral · arXiv:2502.00879 · 3 citations

GnnXemplar: Exemplars to Explanations - Natural Language Rules for Global GNN Interpretability

Burouj Armgaan, Eshan Jain, Harsh Pandey et al.

NeurIPS 2025 · Oral · arXiv:2509.18376 · 2 citations

LASER: A Neuro-Symbolic Framework for Learning Spatio-Temporal Scene Graphs with Weak Supervision

Jiani Huang, Ziyang Li, Mayur Naik et al.

ICLR 2025 · Oral

LayerNavigator: Finding Promising Intervention Layers for Efficient Activation Steering in Large Language Models

Hao Sun, Huailiang Peng, Qiong Dai et al.

NeurIPS 2025 · Oral

Let the Code LLM Edit Itself When You Edit the Code

Zhenyu He, Jun Zhang, Shengjie Luo et al.

ICLR 2025 · Oral · arXiv:2407.03157 · 3 citations

LLM Strategic Reasoning: Agentic Study through Behavioral Game Theory

Jingru Jia, Zehua Yuan, Junhao Pan et al.

NeurIPS 2025 · Oral · arXiv:2502.20432 · 7 citations

MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization

Yougang Lyu, Lingyong Yan, Zihan Wang et al.

ICLR 2025 · Oral · arXiv:2410.07672

Many LLMs Are More Utilitarian Than One

Anita Keshmirian, Razan Baltaji, Babak Hemmatian et al.

NeurIPS 2025 · Oral · arXiv:2507.00814 · 2 citations

Memory Mosaics at scale

Jianyu Zhang, Léon Bottou

NeurIPS 2025 · Oral · arXiv:2507.03285 · 3 citations

PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks

Matthew Chang, Gunjan Chhablani, Alexander Clegg et al.

ICLR 2025 · Oral · arXiv:2411.00081 · 46 citations

PICASO: Permutation-Invariant Context Composition with State Space Models

Tian Yu Liu, Alessandro Achille, Matthew Trager et al.

ICLR 2025 · Oral · arXiv:2502.17605

Prompting as Scientific Inquiry

Ari Holtzman, Chenhao Tan

NeurIPS 2025 · Oral · arXiv:2507.00163

RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility

Haoyu He, Haozheng Luo, Yan Chen et al.

NeurIPS 2025 · Oral · arXiv:2509.23115 · 1 citation

Scaling and context steer LLMs along the same computational path as the human brain

Joséphine Raugel, Jérémy Rapin, Stéphane d'Ascoli et al.

NeurIPS 2025 · Oral · arXiv:2512.01591

SLMRec: Distilling Large Language Models into Small for Sequential Recommendation

Wujiang Xu, Qitian Wu, Zujie Liang et al.

ICLR 2025 · Oral · arXiv:2405.17890 · 17 citations

Stop DDoS Attacking the Research Community with AI-Generated Survey Papers

Jianghao Lin, Rong Shan, Jiachen Zhu et al.

NeurIPS 2025 · Oral · arXiv:2510.09686

VADTree: Explainable Training-Free Video Anomaly Detection via Hierarchical Granularity-Aware Tree

Wenlong Li, Yifei Xu, Yuan Rao et al.

NeurIPS 2025 · Oral · arXiv:2510.22693 · 1 citation

Weakly Supervised Video Scene Graph Generation via Natural Language Supervision

Kibum Kim, Kanghoon Yoon, Yeonjun In et al.

ICLR 2025 · Oral · arXiv:2502.15370 · 2 citations