ICLR Poster "large language models" Papers

122 papers found • Page 1 of 3

3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary Affordance Detection in 3D Worlds

Hengshuo Chu, Xiang Deng, Qi Lv et al.

ICLR 2025 poster • arXiv:2502.20041 • 15 citations

A Closer Look at Machine Unlearning for Large Language Models

Xiaojian Yuan, Tianyu Pang, Chao Du et al.

ICLR 2025 poster • arXiv:2410.08109 • 31 citations

ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints

Divij Handa, Pavel Dolin, Shrinidhi Kumbhar et al.

ICLR 2025 poster • arXiv:2406.04046 • 7 citations

Ada-K Routing: Boosting the Efficiency of MoE-based LLMs

Zijia Zhao, Longteng Guo, Jie Cheng et al.

ICLR 2025 poster • arXiv:2410.10456 • 8 citations

Advancing LLM Reasoning Generalists with Preference Trees

Lifan Yuan, Ganqu Cui, Hanbin Wang et al.

ICLR 2025 poster • arXiv:2404.02078 • 179 citations

AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text

Ximing Lu, Melanie Sclar, Skyler Hallinan et al.

ICLR 2025 poster • arXiv:2410.04265 • 32 citations

AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in Corporate Statements

Adriana-Eufrosina Bora, Pierre-Luc St-Charles, Mirko Bronzi et al.

ICLR 2025 poster • arXiv:2502.07022 • 2 citations

A Multi-Power Law for Loss Curve Prediction Across Learning Rate Schedules

Kairong Luo, Haodong Wen, Shengding Hu et al.

ICLR 2025 poster • arXiv:2503.12811 • 13 citations

AnoLLM: Large Language Models for Tabular Anomaly Detection

Che-Ping Tsai, Ganyu Teng, Phillip Wallis et al.

ICLR 2025 poster • 7 citations

API Pack: A Massive Multi-Programming Language Dataset for API Call Generation

Gavin (Zhen) Guo, Adriana Meza Soria, Wei Sun et al.

ICLR 2025 poster • arXiv:2402.09615 • 4 citations

A Probabilistic Perspective on Unlearning and Alignment for Large Language Models

Yan Scholten, Stephan Günnemann, Leo Schwinn

ICLR 2025 poster • arXiv:2410.03523 • 15 citations

Ask, and it shall be given: On the Turing completeness of prompting

Ruizhong Qiu, Zhe Xu, Wenxuan Bao et al.

ICLR 2025 poster • arXiv:2411.01992 • 5 citations

A Statistical Approach for Controlled Training Data Detection

Zirui Hu, Yingjie Wang, Zheng Zhang et al.

ICLR 2025 poster • 2 citations

A Training-Free Sub-quadratic Cost Transformer Model Serving Framework with Hierarchically Pruned Attention

Heejun Lee, Geon Park, Youngwan Lee et al.

ICLR 2025 poster • arXiv:2406.09827 • 8 citations

AttriBoT: A Bag of Tricks for Efficiently Approximating Leave-One-Out Context Attribution

Fengyuan Liu, Nikhil Kandpal, Colin Raffel

ICLR 2025 poster • arXiv:2411.15102 • 12 citations

Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?

Yifan Feng, Chengwu Yang, Xingliang Hou et al.

ICLR 2025 poster • arXiv:2410.10083 • 10 citations

Beyond Model Collapse: Scaling Up with Synthesized Data Requires Verification

Yunzhen Feng, Elvis Dohmatob, Pu Yang et al.

ICLR 2025 poster • arXiv:2406.07515

Beyond Next Token Prediction: Patch-Level Training for Large Language Models

Chenze Shao, Fandong Meng, Jie Zhou

ICLR 2025 poster • arXiv:2407.12665 • 2 citations

Block Verification Accelerates Speculative Decoding

Ziteng Sun, Uri Mendlovic, Yaniv Leviathan et al.

ICLR 2025 poster • arXiv:2403.10444 • 18 citations

CAMEx: Curvature-aware Merging of Experts

Dung Viet Nguyen, Minh Nguyen, Luc Nguyen et al.

ICLR 2025 poster • arXiv:2502.18821 • 6 citations

Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?

Egor Zverev, Sahar Abdelnabi, Soroush Tabesh et al.

ICLR 2025 poster • arXiv:2403.06833 • 45 citations

Can LLMs Understand Time Series Anomalies?

Zihao Zhou, Rose Yu

ICLR 2025 poster • arXiv:2410.05440 • 32 citations

Catastrophic Failure of LLM Unlearning via Quantization

Zhiwei Zhang, Fali Wang, Xiaomin Li et al.

ICLR 2025 poster • arXiv:2410.16454 • 43 citations

Causally Motivated Sycophancy Mitigation for Large Language Models

Haoxi Li, Xueyang Tang, Jie Zhang et al.

ICLR 2025 poster • 8 citations

Causal Reasoning and Large Language Models: Opening a New Frontier for Causality

Chenhao Tan, Robert Ness, Amit Sharma et al.

ICLR 2025 poster • arXiv:2305.00050 • 390 citations

CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models

Song Wang, Peng Wang, Tong Zhou et al.

ICLR 2025 poster • arXiv:2407.02408 • 13 citations

Certifying Counterfactual Bias in LLMs

Isha Chaudhary, Qian Hu, Manoj Kumar et al.

ICLR 2025 poster • arXiv:2405.18780 • 4 citations

CHASE-SQL: Multi-Path Reasoning and Preference Optimized Candidate Selection in Text-to-SQL

Mohammadreza Pourreza, Hailong Li, Ruoxi Sun et al.

ICLR 2025 poster • arXiv:2410.01943 • 116 citations

ChemAgent: Self-updating Memories in Large Language Models Improves Chemical Reasoning

Xiangru Tang, Tianyu Hu, Muyang Ye et al.

ICLR 2025 poster • 4 citations

ClimaQA: An Automated Evaluation Framework for Climate Question Answering Models

Veeramakali Vignesh Manivannan, Yasaman Jafari, Srikar Eranky et al.

ICLR 2025 poster • arXiv:2410.16701 • 3 citations

CofCA: A STEP-WISE Counterfactual Multi-hop QA benchmark

Jian Wu, Linyi Yang, Zhen Wang et al.

ICLR 2025 poster • arXiv:2402.11924 • 14 citations

Confidence Elicitation: A New Attack Vector for Large Language Models

Brian Formento, Chuan Sheng Foo, See-Kiong Ng

ICLR 2025 poster • arXiv:2502.04643 • 2 citations

Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance

Sachin Goyal, Christina Baek, Zico Kolter et al.

ICLR 2025 poster • 9 citations

CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery

Xiaoshuai Song, Muxi Diao, Guanting Dong et al.

ICLR 2025 poster • arXiv:2406.08587 • 27 citations

DataGen: Unified Synthetic Dataset Generation via Large Language Models

Yue Huang, Siyuan Wu, Chujie Gao et al.

ICLR 2025 poster • arXiv:2406.18966 • 21 citations

DataMan: Data Manager for Pre-training Large Language Models

Ru Peng, Kexin Yang, Yawen Zeng et al.

ICLR 2025 poster • arXiv:2502.19363 • 8 citations

DenseGrounding: Improving Dense Language-Vision Semantics for Ego-centric 3D Visual Grounding

Henry Zheng, Hao Shi, Qihang Peng et al.

ICLR 2025 poster • arXiv:2505.04965 • 8 citations

DiscoveryBench: Towards Data-Driven Discovery with Large Language Models

Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal et al.

ICLR 2025 poster • arXiv:2407.01725 • 36 citations

Do as We Do, Not as You Think: the Conformity of Large Language Models

Zhiyuan Weng, Guikun Chen, Wenguan Wang

ICLR 2025 poster • arXiv:2501.13381 • 18 citations

Do LLMs estimate uncertainty well in instruction-following?

Juyeon Heo, Miao Xiong, Christina Heinze-Deml et al.

ICLR 2025 poster • arXiv:2410.14582 • 13 citations

DSBench: How Far Are Data Science Agents from Becoming Data Science Experts?

Liqiang Jing, Zhehui Huang, Xiaoyang Wang et al.

ICLR 2025 poster • arXiv:2409.07703 • 62 citations

Durable Quantization Conditioned Misalignment Attack on Large Language Models

Peiran Dong, Haowei Li, Song Guo

ICLR 2025 poster • 1 citation

Efficient Jailbreak Attack Sequences on Large Language Models via Multi-Armed Bandit-Based Context Switching

Aditya Ramesh, Shivam Bhardwaj, Aditya Saibewar et al.

ICLR 2025 poster • 3 citations

ELICIT: LLM Augmentation Via External In-context Capability

Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.

ICLR 2025 poster • arXiv:2410.09343 • 6 citations

Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models

Rui Ye, Jingyi Chai, Xiangrui Liu et al.

ICLR 2025 poster • arXiv:2406.10630 • 18 citations

Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models

Jingcheng Deng, Zihao Wei, Liang Pang et al.

ICLR 2025 poster • arXiv:2405.15349 • 6 citations

Fine-tuning can Help Detect Pretraining Data from Large Language Models

Hengxiang Zhang, Songxin Zhang, Bingyi Jing et al.

ICLR 2025 poster • arXiv:2410.10880 • 4 citations

FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference

Xunhao Lai, Jianqiao Lu, Yao Luo et al.

ICLR 2025 poster • arXiv:2502.20766 • 51 citations

ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities

Ezra Karger, Houtan Bastani, Chen Yueh-Han et al.

ICLR 2025 poster • arXiv:2409.19839 • 31 citations

From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data

Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee et al.

ICLR 2025 poster • arXiv:2406.19292 • 19 citations