ICLR "large language models" Papers

113 papers found • Page 2 of 3

Generative Monoculture in Large Language Models

Fan Wu, Emily Black, Varun Chandrasekaran

ICLR 2025 poster • arXiv:2407.02209 • 10 citations

GOFA: A Generative One-For-All Model for Joint Graph Language Modeling

Lecheng Kong, Jiarui Feng, Hao Liu et al.

ICLR 2025 poster • arXiv:2407.09709 • 28 citations

HaDeMiF: Hallucination Detection and Mitigation in Large Language Models

Xiaoling Zhou, Mingjie Zhang, Zhemg Lee et al.

ICLR 2025 poster • 9 citations

HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis

Yuto Nishimura, Takumi Hirose, Masanari Ohi et al.

ICLR 2025 poster • arXiv:2410.04380 • 5 citations

HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models

Seanie Lee, Haebin Seong, Dong Bok Lee et al.

ICLR 2025 poster • arXiv:2410.01524 • 13 citations

Herald: A Natural Language Annotated Lean 4 Dataset

Guoxiong Gao, Yutong Wang, Jiedong Jiang et al.

ICLR 2025 poster • arXiv:2410.10878 • 28 citations

How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension

Xinnan Dai, Haohao Qu, Yifei Shen et al.

ICLR 2025 poster • arXiv:2410.05298 • 20 citations

Human Simulacra: Benchmarking the Personification of Large Language Models

Qiujie Xie, Qiming Feng, Tianqi Zhang et al.

ICLR 2025 poster • arXiv:2402.18180 • 8 citations

Implicit In-context Learning

Zhuowei Li, Zihao Xu, Ligong Han et al.

ICLR 2025 poster • arXiv:2405.14660 • 8 citations

Improved Techniques for Optimization-Based Jailbreaking on Large Language Models

Xiaojun Jia, Tianyu Pang, Chao Du et al.

ICLR 2025 poster • arXiv:2405.21018 • 74 citations

Improving Reasoning Performance in Large Language Models via Representation Engineering

Bertram Højer, Oliver Jarvis, Stefan Heinrich

ICLR 2025 poster • arXiv:2504.19483 • 15 citations

Improving Text-to-Image Consistency via Automatic Prompt Optimization

Melissa Hall, Michal Drozdzal, Oscar Mañas et al.

ICLR 2025 poster

Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for LLM Problem-Solving

Yangzhen Wu, Zhiqing Sun, Shanda Li et al.

ICLR 2025 poster • 146 citations

Integrative Decoding: Improving Factuality via Implicit Self-consistency

Yi Cheng, Xiao Liang, Yeyun Gong et al.

ICLR 2025 poster • arXiv:2410.01556 • 6 citations

Jailbreaking as a Reward Misspecification Problem

Zhihui Xie, Jiahui Gao, Lei Li et al.

ICLR 2025 poster • arXiv:2406.14393 • 9 citations

Language Guided Skill Discovery

Seungeun Rho, Laura Smith, Tianyu Li et al.

ICLR 2025 poster • arXiv:2406.06615 • 14 citations

Large Language Models Assume People are More Rational than We Really are

Ryan Liu, Jiayi Geng, Joshua Peterson et al.

ICLR 2025 poster • arXiv:2406.17055 • 37 citations

Large (Vision) Language Models are Unsupervised In-Context Learners

Artyom Gadetsky, Andrei Atanov, Yulun Jiang et al.

ICLR 2025 poster • arXiv:2504.02349 • 3 citations

LASER: A Neuro-Symbolic Framework for Learning Spatio-Temporal Scene Graphs with Weak Supervision

Jiani Huang, Ziyang Li, Mayur Naik et al.

ICLR 2025 oral

Layerwise Recurrent Router for Mixture-of-Experts

Zihan Qiu, Zeyu Huang, Shuang Cheng et al.

ICLR 2025 poster • arXiv:2408.06793 • 7 citations

Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning

Seanie Lee, Minsu Kim, Lynn Cherif et al.

ICLR 2025 poster • arXiv:2405.18540 • 44 citations

LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

Naman Jain, Han, Alex Gu et al.

ICLR 2025 poster • arXiv:2403.07974 • 1016 citations

LLM-SR: Scientific Equation Discovery via Programming with Large Language Models

Parshin Shojaee, Kazem Meidani, Shashank Gupta et al.

ICLR 2025 poster • arXiv:2404.18400 • 55 citations

Logical Consistency of Large Language Models in Fact-Checking

Bishwamittra Ghosh, Sarah Hasan, Naheed Anjum Arafat et al.

ICLR 2025 poster • arXiv:2412.16100 • 15 citations

Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering

Ziyu Zhao, Tao Shen, Didi Zhu et al.

ICLR 2025 poster • arXiv:2409.16167 • 33 citations

MindSearch: Mimicking Human Minds Elicits Deep AI Searcher

Zehui Chen, Kuikun Liu, Qiuchen Wang et al.

ICLR 2025 poster • arXiv:2407.20183 • 53 citations

Min-K%++: Improved Baseline for Pre-Training Data Detection from Large Language Models

Jingyang Zhang, Jingwei Sun, Eric Yeats et al.

ICLR 2025 poster • 24 citations

Mixture Compressor for Mixture-of-Experts LLMs Gains More

Wei Huang, Yue Liao, Jianhui Liu et al.

ICLR 2025 poster • arXiv:2410.06270 • 22 citations

More RLHF, More Trust? On The Impact of Preference Alignment On Trustworthiness

Aaron J. Li, Satyapriya Krishna, Hima Lakkaraju

ICLR 2025 poster • arXiv:2404.18870 • 10 citations

On Large Language Model Continual Unlearning

Chongyang Gao, Lixu Wang, Kaize Ding et al.

ICLR 2025 poster • arXiv:2407.10223 • 26 citations

On the Role of Attention Heads in Large Language Model Safety

Zhenhong Zhou, Haiyang Yu, Xinghua Zhang et al.

ICLR 2025 poster • arXiv:2410.13708 • 40 citations

OSDA Agent: Leveraging Large Language Models for De Novo Design of Organic Structure Directing Agents

Zhaolin Hu, Yixiao Zhou, Zhongan Wang et al.

ICLR 2025 poster • 6 citations

Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization

Zhanfeng Mo, Long-Kai Huang, Sinno Jialin Pan

ICLR 2025 poster

PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks

Matthew Chang, Gunjan Chhablani, Alexander Clegg et al.

ICLR 2025 oral • arXiv:2411.00081 • 46 citations

Plug, Play, and Generalize: Length Extrapolation with Pointer-Augmented Neural Memory

Svetha Venkatesh, Kien Do, Hung Le et al.

ICLR 2025 poster

Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models

Zhijian Zhuo, Ya Wang, Yutao Zeng et al.

ICLR 2025 poster • arXiv:2411.03884 • 5 citations

PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches

Rana Muhammad Shahroz Khan, Pingzhi Li, Sukwon Yun et al.

ICLR 2025 poster • arXiv:2410.10870 • 3 citations

Procedural Knowledge in Pretraining Drives Reasoning in Large Language Models

Laura Ruis, Maximilian Mozes, Juhan Bae et al.

ICLR 2025 poster • arXiv:2411.12580 • 24 citations

RaSA: Rank-Sharing Low-Rank Adaptation

Zhiwei He, Zhaopeng Tu, Xing Wang et al.

ICLR 2025 poster • arXiv:2503.12576 • 4 citations

Reasoning of Large Language Models over Knowledge Graphs with Super-Relations

Song Wang, Junhong Lin, Xiaojie Guo et al.

ICLR 2025 poster • arXiv:2503.22166 • 17 citations

Re-evaluating Open-ended Evaluation of Large Language Models

Si-Qi Liu, Ian Gemp, Luke Marris et al.

ICLR 2025 poster • arXiv:2502.20170 • 5 citations

Refine Knowledge of Large Language Models via Adaptive Contrastive Learning

Yinghui Li, Haojing Huang, Jiayi Kuang et al.

ICLR 2025 poster • arXiv:2502.07184 • 14 citations

ROUTE: Robust Multitask Tuning and Collaboration for Text-to-SQL

Yang Qin, Chao Chen, Zhihang Fu et al.

ICLR 2025 poster • arXiv:2412.10138 • 8 citations

Self-Updatable Large Language Models by Integrating Context into Model Parameters

Yu Wang, Xinshuang Liu, Xiusi Chen et al.

ICLR 2025 poster • arXiv:2410.00487 • 5 citations

SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters

Teng Xiao, Yige Yuan, Zhengyu Chen et al.

ICLR 2025 poster • arXiv:2502.00883 • 23 citations

SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal

Tinghao Xie, Xiangyu Qi, Yi Zeng et al.

ICLR 2025 poster • arXiv:2406.14598 • 141 citations

SPARTUN3D: Situated Spatial Understanding of 3D World in Large Language Model

Yue Zhang, Zhiyang Xu, Ying Shen et al.

ICLR 2025 poster • arXiv:2410.03878 • 19 citations

Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting

Zilong (Ryan) Wang, Zifeng Wang, Long Le et al.

ICLR 2025 poster • arXiv:2407.08223 • 75 citations

SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking

Xingrun Xing, Boyan Gao, Zheng Liu et al.

ICLR 2025 poster • arXiv:2407.04752 • 21 citations

StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization

Zhuoqun Li, Xuanang Chen, Haiyang Yu et al.

ICLR 2025 poster • arXiv:2410.08815 • 46 citations