ICLR Papers
FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models
Zhipei Xu, Xuanyu Zhang, Runyi Li et al.
Fantastic Copyrighted Beasts and How (Not) to Generate Them
Luxi He, Yangsibo Huang, Weijia Shi et al.
Fantastic Targets for Concept Erasure in Diffusion Models and Where To Find Them
Anh Bui, Thuy-Trang Vu, Long Vuong et al.
Fast and Accurate Blind Flexible Docking
Zizhuo Zhang, Lijun Wu, Kaiyuan Gao et al.
Fast and Slow Streams for Online Time Series Forecasting Without Information Leakage
Ying-yee Ava Lau, Zhiwen Shao, Dit-Yan Yeung
Fast Direct: Query-Efficient Online Black-box Guidance for Diffusion-model Target Generation
Kim Yong Tan, Yueming Lyu, Ivor Tsang et al.
Faster Algorithms for Structured Linear and Kernel Support Vector Machines
Yuzhou Gu, Zhao Song, Lichen Zhang
FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality
Zhengyao Lyu, Chenyang Si, Junhao Song et al.
Faster Cascades via Speculative Decoding
Harikrishna Narasimhan, Wittawat Jitkrittum, Ankit Singh Rawat et al.
Faster Diffusion Sampling with Randomized Midpoints: Sequential and Parallel
Shivam Gupta, Linda Cai, Sitan Chen
Faster Inference of Flow-Based Generative Models via Improved Data-Noise Coupling
Aram Davtyan, Leello Dadi, Volkan Cevher et al.
Fast Feedforward 3D Gaussian Splatting Compression
Yihang Chen, Qianyi Wu, Mengyao Li et al.
Fast Summation of Radial Kernels via QMC Slicing
Johannes Hertrich, Tim Jahn, Michael Quellmalz
Fast training and sampling of Restricted Boltzmann Machines
Nicolas Bereux, Aurélien Decelle, Cyril Furtlehner et al.
Fast Training of Sinusoidal Neural Fields via Scaling Initialization
Taesun Yeom, Sangyoon Lee, Jaeho Lee
Fast Uncovering of Protein Sequence Diversity from Structure
Luca Alessandro Silva, Barthelemy Meynard-Piganeau, Carlo Lucibello et al.
Fast unsupervised ground metric learning with tree-Wasserstein distance
Kira Michaela Düsterwald, Samo Hromadka, Makoto Yamada
Fat-to-Thin Policy Optimization: Offline Reinforcement Learning with Sparse Policies
Lingwei Zhu, Han Wang, Yukie Nagai
Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models
Gen Luo, Yiyi Zhou, Yuxin Zhang et al.
Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks
Binghui Li, Zhixuan Pan, Kaifeng Lyu et al.
Feature-Based Online Bilateral Trade
Solenne Gaucher, Martino Bernasconi, Matteo Castiglioni et al.
Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse
Seung Hyun Cheon, Anneke Wernerfelt, Sorelle Friedler et al.
Federated $Q$-Learning with Reference-Advantage Decomposition: Almost Optimal Regret and Logarithmic Communication Cost
Zhong Zheng, Haochen Zhang, Lingzhou Xue
Federated Class-Incremental Learning: A Hybrid Approach Using Latent Exemplars and Data-Free Techniques to Address Local and Global Forgetting
Milad Khademi Nori, Il-Min Kim, Guanghui Wang
Federated Continual Learning Goes Online: Uncertainty-Aware Memory Management for Vision Tasks and Beyond
Giuseppe Serra, Florian Buettner
Federated Domain Generalization with Data-free On-server Matching Gradient
Binh Nguyen, Minh-Duong Nguyen, Jinsun Park et al.
Federated Few-Shot Class-Incremental Learning
Muhammad Anwar Masum, Mahardhika Pratama, Lin Liu et al.
Federated Granger Causality Learning For Interdependent Clients With State Space Representation
Ayush Mohanty, Nazal Mohamed, Paritosh Ramanan et al.
Federated Residual Low-Rank Adaption of Large Language Models
Yunlu Yan, Chun-Mei Feng, Wangmeng Zuo et al.
FedLWS: Federated Learning with Adaptive Layer-wise Weight Shrinking
Changlong Shi, Jinmeng Li, He Zhao et al.
FedTMOS: Efficient One-Shot Federated Learning with Tsetlin Machine
Shannon How, Jagmohan Chauhan, Geoff Merrett et al.
Feedback Favors the Generalization of Neural ODEs
Jindou Jia, Zihan Yang, Meng Wang et al.
Feedback Schrödinger Bridge Matching
Panagiotis Theodoropoulos, Nikolaos Komianos, Vincent Pacelli et al.
Fengbo: a Clifford Neural Operator pipeline for 3D PDEs in Computational Fluid Dynamics
Alberto Pepe, Mattia Montanari, Joan Lasenby
Ferret-UI 2: Mastering Universal User Interface Understanding Across Platforms
Zhangheng Li, Keen You, Haotian Zhang et al.
Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement
Bryan Bo Cao, Lawrence O'Gorman, Michael Coss et al.
Fewer May Be Better: Enhancing Offline Reinforcement Learning with Reduced Dataset
Yiqin Yang, Quanwei Wang, Chenghao Li et al.
Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization
Xi Lin, Yilu Liu, Xiaoyuan Zhang et al.
F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
Xu Zheng, Farhad Shirani, Zhuomin Chen et al.
Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning
Yujian Liu, Shiyu Chang, Tommi Jaakkola et al.
Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models
Keisuke Kamahori, Tian Tang, Yile Gu et al.
Field-DiT: Diffusion Transformer on Unified Video, 3D, and Game Field Generation
Kangfu Mei, Mo Zhou, Vishal Patel
FIG: Flow with Interpolant Guidance for Linear Inverse Problems
Yici Yan, Yichi Zhang, Xiangming Meng et al.
Filtered not Mixed: Filtering-Based Online Gating for Mixture of Large Language Models
Raeid Saqur, Anastasis Kratsios, Florian Krach et al.
Finally Rank-Breaking Conquers MNL Bandits: Optimal and Efficient Algorithms for MNL Assortment
Aadirupa Saha, Pierre Gaillard
Find A Winning Sign: Sign Is All We Need to Win the Lottery
Junghun Oh, Sungyong Baik, Kyoung Mu Lee
Finding and Only Finding Differential Nash Equilibria by Both Pretending to be a Follower
Guodong Zhang, Xuchan Bao
Finding Shared Decodable Concepts and their Negations in the Brain
Cory Efird, Alex Murphy, Joel Zylberberg et al.
Fine-Grained Verifiers: Preference Modeling as Next-token Prediction in Vision-Language Alignment
Chenhang Cui, An Zhang, Yiyang Zhou et al.
Fine-Tuning Attention Modules Only: Enhancing Weight Disentanglement in Task Arithmetic
Ruochen Jin, Bojian Hou, Jiancong Xiao et al.