"large language models" Papers

986 papers found • Page 14 of 20

Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs

Sungmin Cha, Sungjun Cho, Dasol Hwang et al.

ICLR 2025 • arXiv:2408.06621
22 citations

Towards Robust Knowledge Unlearning: An Adversarial Framework for Assessing and Improving Unlearning Robustness in Large Language Models

Hongbang Yuan, Zhuoran Jin, Pengfei Cao et al.

AAAI 2025 • arXiv:2408.10682
25 citations

Towards Trustworthy Knowledge Graph Reasoning: An Uncertainty Aware Perspective

Bo Ni, Yu Wang, Lu Cheng et al.

AAAI 2025 • arXiv:2410.08985
12 citations

Towards Understanding Safety Alignment: A Mechanistic Perspective from Safety Neurons

Jianhui Chen, Xiaozhi Wang, Zijun Yao et al.

NEURIPS 2025 • arXiv:2406.14144
24 citations

Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments

Marharyta Domnich, Julius Välja, Rasmus Moorits Veski et al.

AAAI 2025 • arXiv:2410.21131
8 citations

Toward Understanding In-context vs. In-weight Learning

Bryan Chan, Xinyi Chen, Andras Gyorgy et al.

ICLR 2025 • arXiv:2410.23042
15 citations

Training-Free Activation Sparsity in Large Language Models

James Liu, Pragaash Ponnusamy, Tianle Cai et al.

ICLR 2025 • arXiv:2408.14690
39 citations

Training-Free Bayesianization for Low-Rank Adapters of Large Language Models

Haizhou Shi, Yibin Wang, Ligong Han et al.

NEURIPS 2025 • arXiv:2412.05723
3 citations

Training Large Language Models for Retrieval-Augmented Question Answering through Backtracking Correction

Huawen Feng, Zekun Yao, Junhao Zheng et al.

ICLR 2025
1 citation

TrajAgent: An LLM-Agent Framework for Trajectory Modeling via Large-and-Small Model Collaboration

Yuwei Du, Jie Feng, Jie Zhao et al.

NEURIPS 2025 • arXiv:2410.20445
4 citations

Trajectory-LLM: A Language-based Data Generator for Trajectory Prediction in Autonomous Driving

Kairui Yang, Zihao Guo, Gengjie Lin et al.

ICLR 2025

Transforming Generic Coder LLMs to Effective Binary Code Embedding Models for Similarity Detection

Litao Li, Leo Song, Steven Ding et al.

NEURIPS 2025

Traversal Verification for Speculative Tree Decoding

Yepeng Weng, Qiao Hu, Xujie Chen et al.

NEURIPS 2025 • arXiv:2505.12398
3 citations

Tree of Preferences for Diversified Recommendation

Hanyang Yuan, Ning Tang, Tongya Zheng et al.

NEURIPS 2025 • arXiv:2601.02386

TreeSynth: Synthesizing Diverse Data from Scratch via Tree-Guided Subspace Partitioning

Sheng Wang, Pengan Chen, Jingqi Zhou et al.

NEURIPS 2025 • spotlight • arXiv:2503.17195
1 citation

Triples as the Key: Structuring Makes Decomposition and Verification Easier in LLM-based TableQA

Zhen Yang, Ziwei Du, Minghan Zhang et al.

ICLR 2025
7 citations

Triplets Better Than Pairs: Towards Stable and Effective Self-Play Fine-Tuning for LLMs

Yibo Wang, Hai-Long Sun, Guangda Huzhang et al.

NEURIPS 2025 • arXiv:2601.08198
5 citations

Trust, But Verify: A Self-Verification Approach to Reinforcement Learning with Verifiable Rewards

Xiaoyuan Liu, Tian Liang, Zhiwei He et al.

NEURIPS 2025 • arXiv:2505.13445
18 citations

Truth over Tricks: Measuring and Mitigating Shortcut Learning in Misinformation Detection

Herun Wan, Jiaying Wu, Minnan Luo et al.

NEURIPS 2025 • arXiv:2506.02350
6 citations

TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks

Xiang Meng, Mehdi Makni, Rahul Mazumder

NEURIPS 2025

T-SHIRT: Token-Selective Hierarchical Data Selection for Instruction Tuning

Yanjun Fu, Faisal Hamman, Sanghamitra Dutta

NEURIPS 2025 • arXiv:2506.01317
7 citations

TTRL: Test-Time Reinforcement Learning

Yuxin Zuo, Kaiyan Zhang, Li Sheng et al.

NEURIPS 2025 • arXiv:2504.16084
129 citations

Týr-the-Pruner: Structural Pruning LLMs via Global Sparsity Distribution Optimization

Guanchen Li, Yixing Xu, Zeping Li et al.

NEURIPS 2025 • arXiv:2503.09657
6 citations

UGMathBench: A Diverse and Dynamic Benchmark for Undergraduate-Level Mathematical Reasoning with Large Language Models

Xin Xu, Jiaxin Zhang, Tianhao Chen et al.

ICLR 2025 • arXiv:2501.13766
14 citations

Understanding and Enhancing the Transferability of Jailbreaking Attacks

Runqi Lin, Bo Han, Fengwang Li et al.

ICLR 2025 • arXiv:2502.03052
18 citations

Understanding Emotional Body Expressions via Large Language Models

Haifeng Lu, Jiuyi Chen, Feng Liang et al.

AAAI 2025 • arXiv:2412.12581
11 citations

Understanding Parametric and Contextual Knowledge Reconciliation within Large Language Models

Jun Zhao, Yongzhuo Yang, Xiang Hu et al.

NEURIPS 2025 • spotlight

UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models

Qizhou Chen, Dakan Wang, Taolin Zhang et al.

NEURIPS 2025 • arXiv:2505.12345
4 citations

Unifying Text Semantics and Graph Structures for Temporal Text-attributed Graphs with Large Language Models

Siwei Zhang, Yun Xiong, Yateng Tang et al.

NEURIPS 2025 • oral • arXiv:2503.14411
3 citations

Uni-LoRA: One Vector is All You Need

Kaiyang Li, Shaobo Han, Qing Su et al.

NEURIPS 2025 • spotlight • arXiv:2506.00799
3 citations

UniPose: A Unified Multimodal Framework for Human Pose Comprehension, Generation and Editing

Yiheng Li, Ruibing Hou, Hong Chang et al.

CVPR 2025 • highlight • arXiv:2411.16781
18 citations

Universal Cross-Tokenizer Distillation via Approximate Likelihood Matching

Benjamin Minixhofer, Ivan Vulić, Edoardo Maria Ponti

NEURIPS 2025 • arXiv:2503.20083
15 citations

Unlearned but Not Forgotten: Data Extraction after Exact Unlearning in LLM

Xiaoyu Wu, Yifei Pang, Terrance Liu et al.

NEURIPS 2025 • arXiv:2505.24379
3 citations

Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning

Tianci Liu, Ruirui Li, Yunzhe Qi et al.

ICLR 2025 • arXiv:2503.00306
12 citations

Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning

Gangwei Jiang, Caigao Jiang, Zhaoyi Li et al.

ICLR 2025 • arXiv:2502.11019
8 citations

Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment

Yuze Zhao, Tianyun Ji, Wenjun Feng et al.

ICLR 2025 • arXiv:2502.13170
6 citations

U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models

Tung-Yu Wu, Melody Lo

ICLR 2025 • arXiv:2410.01692
5 citations

VADTree: Explainable Training-Free Video Anomaly Detection via Hierarchical Granularity-Aware Tree

Wenlong Li, Yifei Xu, Yuan Rao et al.

NEURIPS 2025 • oral • arXiv:2510.22693
1 citation

Valid Inference with Imperfect Synthetic Data

Yewon Byun, Shantanu Gupta, Zachary Lipton et al.

NEURIPS 2025 • arXiv:2508.06635
1 citation

VALLR: Visual ASR Language Model for Lip Reading

Marshall Thomas, Edward Fish, Richard Bowden

ICCV 2025 • arXiv:2503.21408
6 citations

Variational Uncertainty Decomposition for In-Context Learning

I. Shavindra Jayasekera, Jacob Si, Filippo Valdettaro et al.

NEURIPS 2025 • arXiv:2509.02327
2 citations

VERA: Variational Inference Framework for Jailbreaking Large Language Models

Anamika Lochab, Lu Yan, Patrick Pynadath et al.

NEURIPS 2025 • arXiv:2506.22666
1 citation

Video Summarization with Large Language Models

Min Jung Lee, Dayoung Gong, Minsu Cho

CVPR 2025 • arXiv:2504.11199
11 citations

ViFactCheck: A New Benchmark Dataset and Methods for Multi-Domain News Fact-Checking In Vietnamese

Tran Thai Hoa, Tran Quang Duy, Khanh Quoc Tran et al.

AAAI 2025 • arXiv:2412.15308
2 citations

ViLLa: Video Reasoning Segmentation with Large Language Model

Rongkun Zheng, Lu Qi, Xi Chen et al.

ICCV 2025 • arXiv:2407.14500
17 citations

VinePPO: Refining Credit Assignment in RL Training of LLMs

Amirhossein Kazemnejad, Milad Aghajohari, Eva Portelance et al.

ICML 2025 • arXiv:2410.01679
56 citations

Virus Infection Attack on LLMs: Your Poisoning Can Spread "VIA" Synthetic Data

Zi Liang, Qingqing Ye, Xuan Liu et al.

NEURIPS 2025 • spotlight

VT-FSL: Bridging Vision and Text with LLMs for Few-Shot Learning

Wenhao Li, Qiangchang Wang, Xianjing Meng et al.

NEURIPS 2025 • arXiv:2509.25033
4 citations

Weak-for-Strong: Training Weak Meta-Agent to Harness Strong Executors

Fan Nie, Lan Feng, Haotian Ye et al.

COLM 2025 • arXiv:2504.04785
11 citations

Weakly Supervised Video Scene Graph Generation via Natural Language Supervision

Kibum Kim, Kanghoon Yoon, Yeonjun In et al.

ICLR 2025 • oral • arXiv:2502.15370
2 citations