"large language models" Papers

472 papers found • Page 5 of 10

StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization

Zhuoqun Li, Xuanang Chen, Haiyang Yu et al.

ICLR 2025 • poster • arXiv:2410.08815
46 citations

SWE-bench Goes Live!

Linghao Zhang, Shilin He, Chaoyun Zhang et al.

NeurIPS 2025 • poster • arXiv:2505.23419
22 citations

System Prompt Optimization with Meta-Learning

Yumin Choi, Jinheon Baek, Sung Ju Hwang

NeurIPS 2025 • poster • arXiv:2505.09666
4 citations

TCM-Ladder: A Benchmark for Multimodal Question Answering on Traditional Chinese Medicine

Jiacheng Xie, Yang Yu, Ziyang Zhang et al.

NeurIPS 2025 • poster • arXiv:2505.24063
2 citations

ThinkBench: Dynamic Out-of-Distribution Evaluation for Robust LLM Reasoning

Shulin Huang, Linyi Yang, Yan Song et al.

NeurIPS 2025 • poster • arXiv:2502.16268
14 citations

Think Thrice Before You Act: Progressive Thought Refinement in Large Language Models

Chengyu Du, Jinyi Han, Yizhou Ying et al.

ICLR 2025 • poster • arXiv:2410.13413
5 citations

Timely Clinical Diagnosis through Active Test Selection

Silas Ruhrberg Estévez, Nicolás Astorga, Mihaela van der Schaar

NeurIPS 2025 • poster • arXiv:2510.18988

To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning

Zayne Sprague, Fangcong Yin, Juan Rodriguez et al.

ICLR 2025 • poster • arXiv:2409.12183
239 citations

Towards Understanding Safety Alignment: A Mechanistic Perspective from Safety Neurons

Jianhui Chen, Xiaozhi Wang, Zijun Yao et al.

NeurIPS 2025 • poster • arXiv:2406.14144
24 citations

Training-Free Activation Sparsity in Large Language Models

James Liu, Pragaash Ponnusamy, Tianle Cai et al.

ICLR 2025 • poster • arXiv:2408.14690
37 citations

Training-Free Bayesianization for Low-Rank Adapters of Large Language Models

Haizhou Shi, Yibin Wang, Ligong Han et al.

NeurIPS 2025 • poster • arXiv:2412.05723
2 citations

TrajAgent: An LLM-Agent Framework for Trajectory Modeling via Large-and-Small Model Collaboration

Yuwei Du, Jie Feng, Jie Zhao et al.

NeurIPS 2025 • poster • arXiv:2410.20445
3 citations

Trajectory-LLM: A Language-based Data Generator for Trajectory Prediction in Autonomous Driving

Kairui Yang, Zihao Guo, Gengjie Lin et al.

ICLR 2025 • poster

Tree of Preferences for Diversified Recommendation

Hanyang Yuan, Ning Tang, Tongya Zheng et al.

NeurIPS 2025 • poster • arXiv:2601.02386

Triplets Better Than Pairs: Towards Stable and Effective Self-Play Fine-Tuning for LLMs

Yibo Wang, Hai-Long Sun, Guangda Huzhang et al.

NeurIPS 2025 • poster • arXiv:2601.08198
4 citations

Truth over Tricks: Measuring and Mitigating Shortcut Learning in Misinformation Detection

Herun Wan, Jiaying Wu, Minnan Luo et al.

NeurIPS 2025 • poster • arXiv:2506.02350
6 citations

TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks

Xiang Meng, Mehdi Makni, Rahul Mazumder

NeurIPS 2025 • poster

TTRL: Test-Time Reinforcement Learning

Yuxin Zuo, Kaiyan Zhang, Li Sheng et al.

NeurIPS 2025 • poster • arXiv:2504.16084
122 citations

Týr-the-Pruner: Structural Pruning LLMs via Global Sparsity Distribution Optimization

Guanchen Li, Yixing Xu, Zeping Li et al.

NeurIPS 2025 • poster • arXiv:2503.09657
6 citations

UGMathBench: A Diverse and Dynamic Benchmark for Undergraduate-Level Mathematical Reasoning with Large Language Models

Xin Xu, Jiaxin Zhang, Tianhao Chen et al.

ICLR 2025 • poster • arXiv:2501.13766
13 citations

Uni-LoRA: One Vector is All You Need

Kaiyang Li, Shaobo Han, Qing Su et al.

NeurIPS 2025 • spotlight • arXiv:2506.00799
2 citations

Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning

Tianci Liu, Ruirui Li, Yunzhe Qi et al.

ICLR 2025 • poster • arXiv:2503.00306
12 citations

Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning

Gangwei Jiang, Caigao Jiang, Zhaoyi Li et al.

ICLR 2025 • poster • arXiv:2502.11019
8 citations

VADTree: Explainable Training-Free Video Anomaly Detection via Hierarchical Granularity-Aware Tree

Wenlong Li, Yifei Xu, Yuan Rao et al.

NeurIPS 2025 • oral • arXiv:2510.22693
1 citation

Valid Inference with Imperfect Synthetic Data

Yewon Byun, Shantanu Gupta, Zachary Lipton et al.

NeurIPS 2025 • poster • arXiv:2508.06635

Video Summarization with Large Language Models

Min Jung Lee, Dayoung Gong, Minsu Cho

CVPR 2025 • poster • arXiv:2504.11199
8 citations

Virus Infection Attack on LLMs: Your Poisoning Can Spread "VIA" Synthetic Data

Zi Liang, Qingqing Ye, Xuan Liu et al.

NeurIPS 2025 • spotlight

Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation

Hyungjoo Chae, Namyoung Kim, Kai Ong et al.

ICLR 2025 • poster • arXiv:2410.13232
59 citations

What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers

Pulkit Gopalani, Wei Hu

NeurIPS 2025 • poster • arXiv:2506.13688
1 citation

Why Does the Effective Context Length of LLMs Fall Short?

Chenxin An, Jun Zhang, Ming Zhong et al.

ICLR 2025 • poster • arXiv:2410.18745

Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search

Yuichi Inoue, Kou Misaki, Yuki Imajuku et al.

NeurIPS 2025 • spotlight • arXiv:2503.04412
18 citations

WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild

Bill Yuchen Lin, Yuntian Deng, Khyathi Chandu et al.

ICLR 2025 • poster • arXiv:2406.04770
142 citations

WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct

Haipeng Luo, Qingfeng Sun, Can Xu et al.

ICLR 2025 • poster • arXiv:2308.09583
637 citations

WritingBench: A Comprehensive Benchmark for Generative Writing

Yuning Wu, Jiahao Mei, Ming Yan et al.

NeurIPS 2025 • poster • arXiv:2503.05244
41 citations

$S^2$IP-LLM: Semantic Space Informed Prompt Learning with LLM for Time Series Forecasting

Zijie Pan, Yushan Jiang, Sahil Garg et al.

ICML 2024 • oral

Accelerated Speculative Sampling Based on Tree Monte Carlo

Zhengmian Hu, Heng Huang

ICML 2024 • poster

Accurate LoRA-Finetuning Quantization of LLMs via Information Retention

Haotong Qin, Xudong Ma, Xingyu Zheng et al.

ICML 2024 • poster

A Closer Look at the Limitations of Instruction Tuning

Sreyan Ghosh, Chandra Kiran Evuru, Sonal Kumar et al.

ICML 2024 • poster

A Comprehensive Analysis of the Effectiveness of Large Language Models as Automatic Dialogue Evaluators

Chen Zhang, L. F. D’Haro, Yiming Chen et al.

AAAI 2024 • paper • arXiv:2312.15407
49 citations

Active Preference Learning for Large Language Models

William Muldrew, Peter Hayes, Mingtian Zhang et al.

ICML 2024 • poster

Adaptive Text Watermark for Large Language Models

Yepeng Liu, Yuheng Bu

ICML 2024 • poster

Advancing Spatial Reasoning in Large Language Models: An In-Depth Evaluation and Enhancement Using the StepGame Benchmark

Fangjun Li, David C. Hogg, Anthony G. Cohn

AAAI 2024 • paper • arXiv:2401.03991
51 citations

Agent Instructs Large Language Models to be General Zero-Shot Reasoners

Nicholas Crispino, Kyle Montgomery, Fankun Zeng et al.

ICML 2024 • poster

Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar et al.

ICML 2024 • poster

AlphaZero-Like Tree-Search can Guide Large Language Model Decoding and Training

Ziyu Wan, Xidong Feng, Muning Wen et al.

ICML 2024 • poster

A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules?

Agustinus Kristiadi, Felix Strieth-Kalthoff, Marta Skreta et al.

ICML 2024 • poster

Assessing Large Language Models on Climate Information

Jannis Bulian, Mike Schäfer, Afra Amini et al.

ICML 2024 • poster

Asynchronous Large Language Model Enhanced Planner for Autonomous Driving

Yuan Chen, Zi-han Ding, Ziqin Wang et al.

ECCV 2024 • poster • arXiv:2406.14556
33 citations

A Tale of Tails: Model Collapse as a Change of Scaling Laws

Elvis Dohmatob, Yunzhen Feng, Pu Yang et al.

ICML 2024 • poster

Autoformalizing Euclidean Geometry

Logan Murphy, Kaiyu Yang, Jialiang Sun et al.

ICML 2024 • poster