ICLR Poster Papers
5,330 papers found • Page 8 of 107
Beyond the convexity assumption: Realistic tabular data generation under quantifier-free real linear constraints
Mihaela Stoian, Eleonora Giunchiglia
Beyond Worst-Case Dimensionality Reduction for Sparse Vectors
Sandeep Silwal, David Woodruff, Qiuyi (Richard) Zhang
Bias Mitigation in Graph Diffusion Models
Meng Yu, Kun Zhan
Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in Language Models
Wenxuan Zhang, Philip Torr, Mohamed Elhoseiny et al.
BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions
Terry Yue Zhuo, Minh Chien Vu, Jenny Chim et al.
BigDocs: An Open Dataset for Training Multimodal Models on Document and Code Tasks
Juan A. Rodriguez, Xiangru Jian, Siba Smarak Panigrahi et al.
BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities
Shaozhe Hao, Xuantong Liu, Xianbiao Qi et al.
Bilinear MLPs enable weight-based mechanistic interpretability
Michael Pearce, Thomas Dooms, Alice Rigg et al.
BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models
Xingyu Zheng, Xianglong Liu, Haotong Qin et al.
Binary Losses for Density Ratio Estimation
Werner Zellinger
BingoGuard: LLM Content Moderation Tools with Risk Levels
Fan Yin, Philippe Laban, Xiangyu Peng et al.
BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation Experiments
Yusuf Roohani, Andrew Lee, Qian Huang et al.
Biologically Plausible Brain Graph Transformer
Ciyuan Peng, Yuelong Huang, Qichao Dong et al.
Bio-xLSTM: Generative modeling, representation and in-context learning of biological and chemical sequences
Niklas Schmidinger, Lisa Schneckenreiter, Philipp Seidl et al.
BIRD: A Trustworthy Bayesian Inference Framework for Large Language Models
Yu Feng, Ben Zhou, Weidong Lin et al.
BirdSet: A Large-Scale Dataset for Audio Classification in Avian Bioacoustics
Lukas Rauch, Raphael Schwinger, Moritz Wirth et al.
Bisimulation Metric for Model Predictive Control
Yutaka Shimizu, Masayoshi Tomizuka
BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments
Xinghao Wang, Pengyu Wang, Bo Wang et al.
Black-Box Detection of Language Model Watermarks
Thibaud Gloaguen, Nikola Jovanović, Robin Staab et al.
Black Sheep in the Herd: Playing with Spuriously Correlated Attributes for Vision-Language Recognition
Xinyu Tian, Shu Zou, Zhaoyuan Yang et al.
BlendRL: A Framework for Merging Symbolic and Neural Policy Learning
Hikaru Shindo, Quentin Delfosse, Devendra Singh Dhami et al.
Block-Attention for Efficient Prefilling
Dongyang Ma, Yan Wang, Tian Lan
Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
Marianne Arriola, Aaron Gokaslan, Justin Chiu et al.
Block Verification Accelerates Speculative Decoding
Ziteng Sun, Uri Mendlovic, Yaniv Leviathan et al.
BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks
Yunhan Zhao, Xiang Zheng, Lin Luo et al.
BOFormer: Learning to Solve Multi-Objective Bayesian Optimization via Non-Markovian RL
Yu Heng Hung, Kai-Jie Lin, Yu-Heng Lin et al.
Boltzmann-Aligned Inverse Folding Model as a Predictor of Mutational Effects on Protein-Protein Interactions
Xiaoran Jiao, Weian Mao, Wengong Jin et al.
Boltzmann priors for Implicit Transfer Operators
Juan Viguera Diez, Mathias Schreiner, Ola Engkvist et al.
Boltzmann Semantic Score: A Semantic Metric for Evaluating Large Vision Models Using Large Language Models
Ali Khajegili Mirabadi, Katherine Rich, Hossein Farahani et al.
BOND: Aligning LLMs with Best-of-N Distillation
Pier Giuseppe Sessa, Robert Dadashi, Léonard Hussenot-Desenonges et al.
BoneMet: An Open Large-Scale Multi-Modal Murine Dataset for Breast Cancer Bone Metastasis Diagnosis and Prognosis
Tiankuo Chu, Fudong Lin, Shubo Wang et al.
Bonsai: Gradient-free Graph Condensation for Node Classification
Mridul Gupta, Samyak Jain, Vansh Ramani et al.
Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation
Tiansheng Huang, Sihao Hu, Fatih Ilhan et al.
Boosting Latent Diffusion with Perceptual Objectives
Tariq Berrada, Pietro Astolfi, Melissa Hall et al.
Boosting Methods for Interval-censored Data with Regression and Classification
Yuan Bian, Grace Yi, Wenqing He
Boosting Multiple Views for pretrained-based Continual Learning
Quyen Tran, Tung Lam Tran, Khanh Doan et al.
Boosting Neural Combinatorial Optimization for Large-Scale Vehicle Routing Problems
Fu Luo, Xi Lin, Yaoxin Wu et al.
Boosting Perturbed Gradient Ascent for Last-Iterate Convergence in Games
Kenshi Abe, Mitsuki Sakamoto, Kaito Ariu et al.
Boosting Ray Search Procedure of Hard-label Attacks with Transfer-based Priors
Chen Ma, Xinjie Xu, Shuyu Cheng et al.
Boosting the visual interpretability of CLIP via adversarial fine-tuning
Shizhan Gong, Haoyu Lei, Qi Dou et al.
Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation
Sheng-Feng Yu, Jia-Jiun Yao, Wei-Chen Chiu
Bootstrapped Model Predictive Control
Yuhang Wang, Hanwei Guo, Sizhe Wang et al.
Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel
Zun Wang, Jialu Li, Yicong Hong et al.
Bootstrapping Language Models with DPO Implicit Rewards
Changyu Chen, Zichen Liu, Chao Du et al.
Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation
Peiwen Sun, Sitong Cheng, Xiangtai Li et al.
Boundary constrained Gaussian processes for robust physics-informed machine learning of linear partial differential equations
David Dalton, Alan Lazarus, Hao Gao et al.
Bounds on $L_p$ Errors in Density Ratio Estimation via $f$-Divergence Loss Functions
Yoshiaki Kitazawa
BP-Modified Local Loss for Efficient Training of Deep Neural Networks
Lianhai Ren, Qianxiao Li
BrainACTIV: Identifying visuo-semantic properties driving cortical selectivity using diffusion-based image manipulation
Diego García Cerdas, Christina Sartzetaki, Magnus Petersen et al.
Brain Bandit: A Biologically Grounded Neural Network for Efficient Control of Exploration
Chen Jiang, Jiahui An, Yating Liu et al.