ICLR Papers

6,124 papers found • Page 36 of 123

KAA: Kolmogorov-Arnold Attention for Enhancing Attentive Graph Neural Networks

Taoran Fang, Tianhong Gao, Chunping Wang et al.

ICLR 2025 poster • arXiv:2501.13456

KAN: Kolmogorov–Arnold Networks

Ziming Liu, Yixuan Wang, Sachin Vaidya et al.

ICLR 2025 poster

KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models

Fan Wang, Juyong Jiang, Chansung Park et al.

ICLR 2025 poster • arXiv:2412.06071 • 5 citations

KBLaM: Knowledge Base augmented Language Model

Xi Wang, Taketomo Isazawa, Liana Mikaelyan et al.

ICLR 2025 poster • arXiv:2410.10450

Kernel-based Optimally Weighted Conformal Time-Series Prediction

Jonghyeok Lee, Chen Xu, Yao Xie

ICLR 2025 poster • 4 citations

KGARevion: An AI Agent for Knowledge-Intensive Biomedical QA

Xiaorui Su, Yibo Wang, Shanghua Gao et al.

ICLR 2025 poster • arXiv:2410.04660 • 19 citations

K-HALU: Multiple Answer Korean Hallucination Benchmark for Large Language Models

Jaehyung Seo, Heuiseok Lim

ICLR 2025 poster

Kinetix: Investigating the Training of General Agents through Open-Ended Physics-Based Control Tasks

Michael Matthews, Michael Beukman, Chris Lu et al.

ICLR 2025 poster • arXiv:2410.23208 • 20 citations

KinFormer: Generalizable Dynamical Symbolic Regression for Catalytic Organic Reaction Kinetics

Jindou Chen, Jidong Tian, Liang Wu et al.

ICLR 2025 poster

KinPFN: Bayesian Approximation of RNA Folding Kinetics using Prior-Data Fitted Networks

Dominik Scheuer, Frederic Runge, Jörg Franke et al.

ICLR 2025 poster • 2 citations

KiVA: Kid-inspired Visual Analogies for Testing Large Multimodal Models

Eunice Yiu, Maan Qraitem, Anisa Majhi et al.

ICLR 2025 poster • arXiv:2407.17773 • 19 citations

KLay: Accelerating Arithmetic Circuits for Neurosymbolic AI

Jaron Maene, Vincent Derkinderen, Pedro Zuidberg Dos Martires

ICLR 2025 poster • arXiv:2410.11415

kNN Attention Demystified: A Theoretical Exploration for Scalable Transformers

Themistoklis Haris

ICLR 2025 poster

Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding

Xin Gu, Yaojie Shen, Chenxi Luo et al.

ICLR 2025 oral • arXiv:2502.11168 • 8 citations

Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution

Simiao Li, Yun Zhang, Wei Li et al.

ICLR 2025 poster • arXiv:2404.02573 • 4 citations

Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition

Jiyeon Kim, Hyunji Lee, Hyowon Cho et al.

ICLR 2025 poster • arXiv:2410.01380

Knowledge Graph Finetuning Enhances Knowledge Manipulation in Large Language Models

Hanzhu Chen, Xu Shen, Jie Wang et al.

ICLR 2025 poster • 8 citations

Knowledge Localization: Mission Not Accomplished? Enter Query Localization!

Yuheng Chen, Pengfei Cao, Yubo Chen et al.

ICLR 2025 poster • arXiv:2405.14117 • 11 citations

Kolmogorov-Arnold Transformer

Xingyi Yang, Xinchao Wang

ICLR 2025 poster • arXiv:2409.10594 • 88 citations

KooNPro: A Variance-Aware Koopman Probabilistic Model Enhanced by Neural Process for Time Series Forecasting

Ronghua Zheng, Hanru Bai, Weiyang Ding

ICLR 2025 oral • 2 citations

KOR-Bench: Benchmarking Language Models on Knowledge-Orthogonal Reasoning Tasks

Kaijing Ma, Xeron Du, Yunran Wang et al.

ICLR 2025 poster • arXiv:2410.06526 • 54 citations

Kronecker Mask and Interpretive Prompts are Language-Action Video Learners

Jingyi Yang, Zitong Yu, Nixiuming et al.

ICLR 2025 oral • arXiv:2502.03549 • 3 citations

L3Ms — Lagrange Large Language Models

Guneet Singh Dhillon, Xingjian Shi, Yee Whye Teh et al.

ICLR 2025 poster • arXiv:2410.21533 • 1 citation

LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion

Biao Zhang, Peter Wonka

ICLR 2025 poster • arXiv:2410.01295 • 11 citations

Lambda-Skip Connections: the architectural component that prevents Rank Collapse

Federico Arangath Joseph, Jerome Sieber, Melanie Zeilinger et al.

ICLR 2025 poster • arXiv:2410.10609 • 2 citations

LaMPlace: Learning to Optimize Cross-Stage Metrics in Macro Placement

Zijie Geng, Jie Wang, Ziyan Liu et al.

ICLR 2025 poster

LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning

Zhe Li, Weihao Yuan, Yisheng He et al.

ICLR 2025 poster • arXiv:2410.07093 • 33 citations

LancBiO: Dynamic Lanczos-aided Bilevel Optimization via Krylov Subspace

Yan Yang, Bin Gao, Ya-xiang Yuan

ICLR 2025 poster • arXiv:2404.03331 • 3 citations

Langevin Soft Actor-Critic: Efficient Exploration through Uncertainty-Driven Critic Learning

Haque Ishfaq, Guangyuan Wang, Sami Islam et al.

ICLR 2025 poster • arXiv:2501.17827

Language Agents Meet Causality -- Bridging LLMs and Causal World Models

John Gkountouras, Matthias Lindemann, Phillip Lippe et al.

ICLR 2025 oral • arXiv:2410.19923 • 5 citations

Language-Assisted Feature Transformation for Anomaly Detection

EungGu Yun, Heonjin Ha, Yeongwoo Nam et al.

ICLR 2025 poster • arXiv:2503.01184 • 2 citations

Language Guided Skill Discovery

Seungeun Rho, Laura Smith, Tianyu Li et al.

ICLR 2025 poster • arXiv:2406.06615 • 15 citations

Language-Image Models with 3D Understanding

Jang Hyun Cho, Boris Ivanovic, Yulong Cao et al.

ICLR 2025 poster • arXiv:2405.03685 • 27 citations

Language Imbalance Driven Rewarding for Multilingual Self-improving

Wen Yang, Junhong Wu, Chen Wang et al.

ICLR 2025 poster • arXiv:2410.08964 • 23 citations

Language Model Alignment in Multilingual Trolley Problems

Zhijing Jin, Max Kleiman-Weiner, Giorgio Piatti et al.

ICLR 2025 oral • arXiv:2407.02273

Language Models are Advanced Anonymizers

Robin Staab, Mark Vero, Mislav Balunovic et al.

ICLR 2025 poster • arXiv:2402.13846 • 3 citations

Language Models Are Implicitly Continuous

Samuele Marro, Davide Evangelista, X. Huang et al.

ICLR 2025 poster • arXiv:2504.03933 • 3 citations

Language Models Learn to Mislead Humans via RLHF

Jiaxin Wen, Ruiqi Zhong, Akbir Khan et al.

ICLR 2025 poster • arXiv:2409.12822 • 73 citations

Language Models Need Inductive Biases to Count Inductively

Yingshan Chang, Yonatan Bisk

ICLR 2025 poster • arXiv:2405.20131 • 19 citations

Language models scale reliably with over-training and on downstream tasks

Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar et al.

ICLR 2025 poster • arXiv:2403.08540 • 77 citations

Language Models Trained to do Arithmetic Predict Human Risky and Intertemporal Choice

Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths

ICLR 2025 oral • arXiv:2405.19313 • 8 citations

Language Representations Can be What Recommenders Need: Findings and Potentials

Leheng Sheng, An Zhang, Yi Zhang et al.

ICLR 2025 poster • arXiv:2407.05441 • 23 citations

LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding

Doohyuk Jang, Sihwan Park, June Yong Yang et al.

ICLR 2025 poster • arXiv:2410.03355 • 30 citations

Laplace Sample Information: Data Informativeness Through a Bayesian Lens

Johannes Kaiser, Kristian Schwethelm, Daniel Rueckert et al.

ICLR 2025 poster • arXiv:2505.15303

Large Convolutional Model Tuning via Filter Subspace

Wei Chen, Zichen Miao, Qiang Qiu

ICLR 2025 poster • arXiv:2403.00269

Large Language Models are Interpretable Learners

Ruochen Wang, Si Si, Felix Yu et al.

ICLR 2025 poster • arXiv:2406.17224

Large Language Models Assume People are More Rational than We Really are

Ryan Liu, Jiayi Geng, Joshua Peterson et al.

ICLR 2025 poster • arXiv:2406.17055 • 37 citations

Large Language Models can Become Strong Self-Detoxifiers

Ching-Yun Ko, Pin-Yu Chen, Payel Das et al.

ICLR 2025 poster • 3 citations

Large Language Models Meet Symbolic Provers for Logical Reasoning Evaluation

Chengwen Qi, Ren Ma, Bowen Li et al.

ICLR 2025 poster • arXiv:2502.06563 • 25 citations

Large Language Models Often Say One Thing and Do Another

Ruoxi Xu, Hongyu Lin, Xianpei Han et al.

ICLR 2025 poster • arXiv:2503.07003