ICLR Papers
6,124 papers found • Page 24 of 123
Everything, Everywhere, All at Once: Is Mechanistic Interpretability Identifiable?
Maxime Méloux, Silviu Maniu, François Portet et al.
Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models
Jingcheng Deng, Zihao Wei, Liang Pang et al.
Evidential Learning-based Certainty Estimation for Robust Dense Feature Matching
Lile Cai, Chuan Sheng Foo, Xun Xu et al.
Exact Byte-Level Probabilities from Tokenized Language Models for FIM-Tasks and Model Ensembles
Buu Phan, Brandon Amos, Itai Gat et al.
Exact Certification of (Graph) Neural Networks Against Label Poisoning
Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann et al.
Exact Community Recovery under Side Information: Optimality of Spectral Algorithms
Julia Gaudio, Nirmit Joshi
Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks
Maximilian Muschalik, Fabian Fumagalli, Paolo Frazzetto et al.
ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning
Xiao Yu, Baolin Peng, Vineeth Vajipey et al.
Examining Alignment of Large Language Models through Representative Heuristics: the case of political stereotypes
Sullam Jeoung, Yubin Ge, Haohan Wang et al.
Execution-guided within-prompt search for programming-by-example
Gust Verbruggen, Ashish Tiwari, Mukul Singh et al.
Expand and Compress: Exploring Tuning Principles for Continual Spatio-Temporal Graph Forecasting
Wei Chen, Yuxuan Liang
Expected Return Symmetries
Darius Muglich, Johannes Forkel, Elise van der Pol et al.
Expected Sliced Transport Plans
Xinran Liu, Rocio Diaz Martin, Yikun Bai et al.
Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation
Itamar Zimerman, Ameen Ali Ali, Lior Wolf
Explain Yourself, Briefly! Self-Explaining Neural Networks with Concise Sufficient Reasons
Shahaf Bassan, Ron Eliav, Shlomit Gur
Explanations of GNN on Evolving Graphs via Axiomatic Layer edges
Yazheng Liu, Sihong Xie
Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval
Mohammad Omama, Po-han Li, Sandeep Chinchali
Exploiting Hankel-Toeplitz Structures for Fast Computation of Kernel Precision Matrices
Frida Viset, Frederiek Wesel, Arno Solin et al.
Exploiting Hidden Symmetry to Improve Objective Perturbation for DP Linear Learners with a Nonsmooth L1-Norm
Du Chen, Geoffrey A. Chua
Exploiting Structure in Offline Multi-Agent RL: The Benefits of Low Interaction Rank
Wenhao Zhan, Scott Fujimoto, Zheqing Zhu et al.
Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
Tengyang Xie, Dylan Foster, Akshay Krishnamurthy et al.
Explore Theory of Mind: program-guided adversarial data generation for theory of mind reasoning
Melanie Sclar, Jane Dwivedi-Yu, Maryam Fazel-Zarandi et al.
Exploring a Principled Framework for Deep Subspace Clustering
Xianghan Meng, Zhiyuan Huang, Wei He et al.
Exploring channel distinguishability in local neighborhoods of the model space in quantum neural networks
Sabrina Herbst, Sandeep Cranganore, Vincenzo De Maio et al.
Exploring Learning Complexity for Efficient Downstream Dataset Pruning
Wenyu Jiang, Zhenlong Liu, Zejian Xie et al.
Exploring Local Memorization in Diffusion Models via Bright Ending Attention
Chen Chen, Daochang Liu, Mubarak Shah et al.
Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View
Xuan Liu, Jie Zhang, Haoyang Shang et al.
Exploring the Camera Bias of Person Re-identification
Myungseo Song, Jin-Woo Park, Jong-Seok Lee
Exploring the Design Space of Visual Context Representation in Video MLLMs
Yifan Du, Yuqi Huo, Kun Zhou et al.
Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models
Amir Mohammad Karimi Mamaghan, Samuele Papa, Karl H. Johansson et al.
Exploring The Forgetting in Adversarial Training: A Novel Method for Enhancing Robustness
Xianglu Wang, Hu Ding
Exploring The Loss Landscape Of Regularized Neural Networks Via Convex Duality
Sungyoon Kim, Aaron Mishkin, Mert Pilanci
Exponential Topology-enabled Scalable Communication in Multi-agent Reinforcement Learning
Xinran Li, Xiaolu Wang, Chenjia Bai et al.
Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models
Aniruddha Kembhavi, Mohit Bansal, Amita Kamath et al.
Exposure Bracketing Is All You Need For A High-Quality Image
Zhilu Zhang, Shuohao Zhang, Renlong Wu et al.
Expressivity of Neural Networks with Random Weights and Learned Biases
Ezekiel Williams, Alexandre Payeur, Avery Ryoo et al.
Extendable and Iterative Structure Learning Strategy for Bayesian Networks
Hamid Kalantari, Russell Greiner, Pouria Ramazi
Extending Mercer's expansion to indefinite and asymmetric kernels
Sungwoo Jeong, Alex Townsend
Extreme Risk Mitigation in Reinforcement Learning using Extreme Value Theory
Jan Drgona, Mahantesh Halappanavar, Frank Liu et al.
FaceShot: Bring Any Character into Life
Junyao Gao, Yanan Sun, Fei Shen et al.
Facilitating Multi-turn Function Calling for LLMs via Compositional Instruction Tuning
Mingyang Chen, Haoze Sun, Tianpeng Li et al.
Factor Graph-based Interpretable Neural Networks
Yicong Li, Kuanjiu Zhou, Shuo Yu et al.
FACTS: A Factored State-Space Framework for World Modelling
Li Nanbo, Firas Laakom, Yucheng Xu et al.
Factual Context Validation and Simplification: A Scalable Method to Enhance GPT Trustworthiness and Efficiency
Tianyi Huang
Failures to Find Transferable Image Jailbreaks Between Vision-Language Models
Rylan Schaeffer, Dan Valentine, Luke Bailey et al.
Fair Clustering in the Sliding Window Model
Vincent Cohen-Addad, Shaofeng Jiang, Qiaoyuan Yang et al.
FairDen: Fair Density-Based Clustering
Lena Krieger, Anna Beer, Pernille Matthews et al.
FairMT-Bench: Benchmarking Fairness for Multi-turn Dialogue in Conversational LLMs
Zhiting Fan, Ruizhe Chen, Tianxiang Hu et al.
Fair Submodular Cover
Wenjing Chen, Shuo Xing, Samson Zhou et al.
FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows"
Yifei Ming, Senthil Purushwalkam, Shrey Pandit et al.