ICLR Papers
6,124 papers found • Page 36 of 123
KAA: Kolmogorov-Arnold Attention for Enhancing Attentive Graph Neural Networks
Taoran Fang, Tianhong Gao, Chunping Wang et al.
KAN: Kolmogorov–Arnold Networks
Ziming Liu, Yixuan Wang, Sachin Vaidya et al.
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models
Fan Wang, Juyong Jiang, Chansung Park et al.
KBLaM: Knowledge Base augmented Language Model
Xi Wang, Taketomo Isazawa, Liana Mikaelyan et al.
Kernel-based Optimally Weighted Conformal Time-Series Prediction
Jonghyeok Lee, Chen Xu, Yao Xie
KGARevion: An AI Agent for Knowledge-Intensive Biomedical QA
Xiaorui Su, Yibo Wang, Shanghua Gao et al.
K-HALU: Multiple Answer Korean Hallucination Benchmark for Large Language Models
Jaehyung Seo, Heuiseok Lim
Kinetix: Investigating the Training of General Agents through Open-Ended Physics-Based Control Tasks
Michael Matthews, Michael Beukman, Chris Lu et al.
KinFormer: Generalizable Dynamical Symbolic Regression for Catalytic Organic Reaction Kinetics
Jindou Chen, Jidong Tian, Liang Wu et al.
KinPFN: Bayesian Approximation of RNA Folding Kinetics using Prior-Data Fitted Networks
Dominik Scheuer, Frederic Runge, Jörg Franke et al.
KiVA: Kid-inspired Visual Analogies for Testing Large Multimodal Models
Eunice Yiu, Maan Qraitem, Anisa Majhi et al.
KLay: Accelerating Arithmetic Circuits for Neurosymbolic AI
Jaron Maene, Vincent Derkinderen, Pedro Zuidberg Dos Martires
kNN Attention Demystified: A Theoretical Exploration for Scalable Transformers
Themistoklis Haris
Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding
Xin Gu, Yaojie Shen, Chenxi Luo et al.
Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution
Simiao Li, Yun Zhang, Wei Li et al.
Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
Jiyeon Kim, Hyunji Lee, Hyowon Cho et al.
Knowledge Graph Finetuning Enhances Knowledge Manipulation in Large Language Models
Hanzhu Chen, Xu Shen, Jie Wang et al.
Knowledge Localization: Mission Not Accomplished? Enter Query Localization!
Yuheng Chen, Pengfei Cao, Yubo Chen et al.
Kolmogorov-Arnold Transformer
Xingyi Yang, Xinchao Wang
KooNPro: A Variance-Aware Koopman Probabilistic Model Enhanced by Neural Process for Time Series Forecasting
Ronghua Zheng, Hanru Bai, Weiyang Ding
KOR-Bench: Benchmarking Language Models on Knowledge-Orthogonal Reasoning Tasks
Kaijing Ma, Xeron Du, Yunran Wang et al.
Kronecker Mask and Interpretive Prompts are Language-Action Video Learners
Jingyi Yang, Zitong Yu, Nixiuming et al.
L3Ms — Lagrange Large Language Models
Guneet Singh Dhillon, Xingjian Shi, Yee Whye Teh et al.
LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion
Biao Zhang, Peter Wonka
Lambda-Skip Connections: the architectural component that prevents Rank Collapse
Federico Arangath Joseph, Jerome Sieber, Melanie Zeilinger et al.
LaMPlace: Learning to Optimize Cross-Stage Metrics in Macro Placement
Zijie Geng, Jie Wang, Ziyan Liu et al.
LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning
Zhe Li, Weihao Yuan, Yisheng He et al.
LancBiO: Dynamic Lanczos-aided Bilevel Optimization via Krylov Subspace
Yan Yang, Bin Gao, Ya-xiang Yuan
Langevin Soft Actor-Critic: Efficient Exploration through Uncertainty-Driven Critic Learning
Haque Ishfaq, Guangyuan Wang, Sami Islam et al.
Language Agents Meet Causality -- Bridging LLMs and Causal World Models
John Gkountouras, Matthias Lindemann, Phillip Lippe et al.
Language-Assisted Feature Transformation for Anomaly Detection
EungGu Yun, Heonjin Ha, Yeongwoo Nam et al.
Language Guided Skill Discovery
Seungeun Rho, Laura Smith, Tianyu Li et al.
Language-Image Models with 3D Understanding
Jang Hyun Cho, Boris Ivanovic, Yulong Cao et al.
Language Imbalance Driven Rewarding for Multilingual Self-improving
Wen Yang, Junhong Wu, Chen Wang et al.
Language Model Alignment in Multilingual Trolley Problems
Zhijing Jin, Max Kleiman-Weiner, Giorgio Piatti et al.
Language Models are Advanced Anonymizers
Robin Staab, Mark Vero, Mislav Balunovic et al.
Language Models Are Implicitly Continuous
Samuele Marro, Davide Evangelista, X. Huang et al.
Language Models Learn to Mislead Humans via RLHF
Jiaxin Wen, Ruiqi Zhong, Akbir Khan et al.
Language Models Need Inductive Biases to Count Inductively
Yingshan Chang, Yonatan Bisk
Language models scale reliably with over-training and on downstream tasks
Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar et al.
Language Models Trained to do Arithmetic Predict Human Risky and Intertemporal Choice
Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths
Language Representations Can be What Recommenders Need: Findings and Potentials
Leheng Sheng, An Zhang, Yi Zhang et al.
LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding
Doohyuk Jang, Sihwan Park, June Yong Yang et al.
Laplace Sample Information: Data Informativeness Through a Bayesian Lens
Johannes Kaiser, Kristian Schwethelm, Daniel Rueckert et al.
Large Convolutional Model Tuning via Filter Subspace
Wei Chen, Zichen Miao, Qiang Qiu
Large Language Models are Interpretable Learners
Ruochen Wang, Si Si, Felix Yu et al.
Large Language Models Assume People are More Rational than We Really are
Ryan Liu, Jiayi Geng, Joshua Peterson et al.
Large Language Models can Become Strong Self-Detoxifiers
Ching-Yun Ko, Pin-Yu Chen, Payel Das et al.
Large Language Models Meet Symbolic Provers for Logical Reasoning Evaluation
Chengwen Qi, Ren Ma, Bowen Li et al.
Large Language Models Often Say One Thing and Do Another
Ruoxi Xu, Hongyu Lin, Xianpei Han et al.