"large language models" Papers
472 papers found • Page 4 of 10
PANORAMA: A Dataset and Benchmarks Capturing Decision Trails and Rationales in Patent Examination
Hyunseung Lim, Sooyohn Nam, Sungmin Na et al.
Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization
Zhanfeng Mo, Long-Kai Huang, Sinno Jialin Pan
ParamMute: Suppressing Knowledge-Critical FFNs for Faithful Retrieval-Augmented Generation
Pengcheng Huang, Zhenghao Liu, Yukun Yan et al.
PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks
Matthew Chang, Gunjan Chhablani, Alexander Clegg et al.
Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos
Weifeng Lin, Xinyu Wei, Ruichuan An et al.
PlanU: Large Language Model Reasoning through Planning under Uncertainty
Ziwei Deng, Mian Deng, Chenjing Liang et al.
Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models
Zhijian Zhuo, Ya Wang, Yutao Zeng et al.
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches
Rana Muhammad Shahroz Khan, Pingzhi Li, Sukwon Yun et al.
Preference-driven Knowledge Distillation for Few-shot Node Classification
Xing Wei, Chunchun Chen, Rui Fan et al.
Probabilistic Reasoning with LLMs for Privacy Risk Estimation
Jonathan Zheng, Alan Ritter, Sauvik Das et al.
Probabilistic Token Alignment for Large Language Model Fusion
Runjia Zeng, James Liang, Cheng Han et al.
Progress Reward Model for Reinforcement Learning via Large Language Models
Xiuhui Zhang, Ning Gao, Xingyu Jiang et al.
Prompting as Scientific Inquiry
Ari Holtzman, Chenhao Tan
RaSA: Rank-Sharing Low-Rank Adaptation
Zhiwei He, Zhaopeng Tu, Xing Wang et al.
Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning
Arian Raje, Baris Askin, Divyansh Jhunjhunwala et al.
Reasoning Models Better Express Their Confidence
Dongkeun Yoon, Seungone Kim, Sohee Yang et al.
Re-evaluating Open-ended Evaluation of Large Language Models
Si-Qi Liu, Ian Gemp, Luke Marris et al.
Refine Knowledge of Large Language Models via Adaptive Contrastive Learning
Yinghui Li, Haojing Huang, Jiayi Kuang et al.
Reliable Decision-Making via Calibration-Oriented Retrieval-Augmented Generation
Chaeyun Jang, Deukhwan Cho, Seanie Lee et al.
ReMA: Learning to Meta-Think for LLMs with Multi-agent Reinforcement Learning
Ziyu Wan, Yunxiang Li, Xiaoyu Wen et al.
Representation Consistency for Accurate and Coherent LLM Answer Aggregation
Junqi Jiang, Tom Bewley, Salim I. Amoukou et al.
RESAnything: Attribute Prompting for Arbitrary Referring Segmentation
Ruiqi Wang, Hao Zhang
Rethinking Residual Distribution in Locate-then-Edit Model Editing
Xiaopeng Li, Shangwen Wang, Shasha Li et al.
Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector
Xiao Guo, Xiufeng Song, Yue Zhang et al.
Revising and Falsifying Sparse Autoencoder Feature Explanations
George Ma, Samuel Pfrommer, Somayeh Sojoudi
RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen et al.
RoboTron-Nav: A Unified Framework for Embodied Navigation Integrating Perception, Planning, and Prediction
Yufeng Zhong, Chengjian Feng, Feng Yan et al.
Robust Hallucination Detection in LLMs via Adaptive Token Selection
Mengjia Niu, Hamed Haddadi, Guansong Pang
ROUTE: Robust Multitask Tuning and Collaboration for Text-to-SQL
Yang Qin, Chao Chen, Zhihang Fu et al.
RSAVQ: Riemannian Sensitivity-Aware Vector Quantization for Large Language Models
Zukang Xu, Xing Hu, Qiang Wu et al.
rStar-Coder: Scaling Competitive Code Reasoning with a Large-Scale Verified Dataset
Yifei Liu, Li Lyna Zhang, Yi Zhu et al.
Scaling and context steer LLMs along the same computational path as the human brain
Joséphine Raugel, Jérémy Rapin, Stéphane d'Ascoli et al.
Self-Evolving Pseudo-Rehearsal for Catastrophic Forgetting with Task Similarity in LLMs
Jun Wang, Liang Ding, Shuai Wang et al.
Self Iterative Label Refinement via Robust Unlabeled Learning
Hikaru Asano, Tadashi Kozuno, Yukino Baba
Self-Updatable Large Language Models by Integrating Context into Model Parameters
Yu Wang, Xinshuang Liu, Xiusi Chen et al.
Self-Verification Provably Prevents Model Collapse in Recursive Synthetic Training
Shi Fu, Yingjie Wang, Yuzhu Chen et al.
SeRL: Self-play Reinforcement Learning for Large Language Models with Limited Data
Wenkai Fang, Shunyu Liu, Yang Zhou et al.
ShiQ: Bringing back Bellman to LLMs
Pierre Clavier, Nathan Grinsztajn, Raphaël Avalos et al.
SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters
Teng Xiao, Yige Yuan, Zhengyu Chen et al.
SIMS: Simulating Stylized Human-Scene Interactions with Retrieval-Augmented Script Generation
Wenjia Wang, Liang Pan, Zhiyang Dou et al.
Simulating Society Requires Simulating Thought
Chance Jiajie Li, Jiayi Wu, Zhenze Mo et al.
S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning
Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.
SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal
Tinghao Xie, Xiangyu Qi, Yi Zeng et al.
SPARTUN3D: Situated Spatial Understanding of 3D World in Large Language Model
Yue Zhang, Zhiyang Xu, Ying Shen et al.
Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting
Zilong (Ryan) Wang, Zifeng Wang, Long Le et al.
SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking
Xingrun Xing, Boyan Gao, Zheng Liu et al.
SSTAG: Structure-Aware Self-Supervised Learning Method for Text-Attributed Graphs
Ruyue Liu, Rong Yin, Xiangzhen Bo et al.
SteerConf: Steering LLMs for Confidence Elicitation
Ziang Zhou, Tianyuan Jin, Jieming Shi et al.
Stop DDoS Attacking the Research Community with AI-Generated Survey Papers
Jianghao Lin, Rong Shan, Jiachen Zhu et al.
Streaming Attention Approximation via Discrepancy Theory
Ekaterina Kochetkova, Kshiteej Jitesh Sheth, Insu Han et al.