ICLR Papers
6,124 papers found • Page 102 of 123
Local Composite Saddle Point Optimization
Site Bai, Brian Bullins
Local Graph Clustering with Noisy Labels
Artur Back de Luca, Kimon Fountoulakis, Shenghao Yang
Locality-Aware Graph Rewiring in GNNs
Federico Barbero, Ameya Velingker, Amin Saberi et al.
Locality Sensitive Sparse Encoding for Learning World Models Online
Zichen Liu, Chao Du, Wee Sun Lee et al.
Localizing and Editing Knowledge In Text-to-Image Generative Models
Samyadeep Basu, Nanxuan Zhao, Vlad Morariu et al.
Local Search GFlowNets
Minsu Kim, Yun Taeyoung, Emmanuel Bengio et al.
LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models
Yixiao Li, Yifan Yu, Chen Liang et al.
Logical Languages Accepted by Transformer Encoders with Hard Attention
Pablo Barcelo, Alexander Kozachinskiy, Anthony W. Lin et al.
LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints
Weidi Xu, Jingwei Wang, Lele Xie et al.
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Yukang Chen, Shengju Qian, Haotian Tang et al.
Long-Short-Range Message-Passing: A Physics-Informed Framework to Capture Non-Local Interaction for Scalable Molecular Dynamics Simulation
Yunyang Li, Yusong Wang, Lin Huang et al.
Long-tailed Diffusion Models with Oriented Calibration
Tianjiao Zhang, Huangjie Zheng, Jiangchao Yao et al.
Long-Term Typhoon Trajectory Prediction: A Physics-Conditioned Approach Without Reanalysis Data
Young-Jae Park, Minseok Seo, Doyi Kim et al.
Look, Remember and Reason: Grounded Reasoning in Videos with Language Models
Apratim Bhattacharyya, Sunny Panchal, Reza Pourreza et al.
Looped Transformers are Better at Learning Learning Algorithms
Liu Yang, Kangwook Lee, Robert Nowak et al.
LOQA: Learning with Opponent Q-Learning Awareness
Milad Aghajohari, Juan Duque, Timotheus Cooijmans et al.
LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents
Jae-Woo Choi, Youngwoo Yoon et al.
Low Rank Matrix Completion via Robust Alternating Minimization in Nearly Linear Time
Yuzhou Gu, Zhao Song, Junze Yin et al.
lpNTK: Better Generalisation with Less Data via Sample Interaction During Learning
Shangmin Guo, Yi Ren, Stefano Albrecht et al.
LQ-LoRA: Low-rank plus Quantized Matrix Decomposition for Efficient Language Model Finetuning
Han Guo, Philip Greengard, Eric Xing et al.
LRM: Large Reconstruction Model for Single Image to 3D
Yicong Hong, Kai Zhang, Jiuxiang Gu et al.
LRR: Language-Driven Resamplable Continuous Representation against Adversarial Tracking Attacks
Jianlang Chen, Xuhong Ren, Qing Guo et al.
LUM-ViT: Learnable Under-sampling Mask Vision Transformer for Bandwidth Limited Optical Signal Acquisition
Lingfeng Liu, Dong Ni, Hangjie Yuan
LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models
Gunho Park, Baeseong Park, Minsub Kim et al.
M3C: A Framework towards Convergent, Flexible, and Unsupervised Learning of Mixture Graph Matching and Clustering
Jiaxin Lu, Zetian Jiang, Tianzhe Wang et al.
Machine Unlearning for Image-to-Image Generative Models
Guihong Li, Hsiang Hsu, Chun-Fu Chen et al.
Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
Guocheng Qian, Jinjie Mai, Abdullah Hamdi et al.
MagicDrive: Street View Generation with Diverse 3D Geometry Control
Ruiyuan Gao, Kai Chen, Enze Xie et al.
MaGIC: Multi-modality Guided Image Completion
Hao Wang, Yongsheng Yu, Tiejian Luo et al.
Magnitude Invariant Parametrizations Improve Hypernetwork Learning
Jose Javier Gonzalez Ortiz, John Guttag, Adrian Dalca
Magnushammer: A Transformer-Based Approach to Premise Selection
Maciej Mikuła, Szymon Tworkowski, Szymon Antoniak et al.
Making LLaMA SEE and Draw with SEED Tokenizer
Yuying Ge, Sijie Zhao, Ziyun Zeng et al.
Making Pre-trained Language Models Great on Tabular Prediction
Jiahuan Yan, Bo Zheng, Hongxia Xu et al.
Making Retrieval-Augmented Language Models Robust to Irrelevant Context
Ori Yoran, Tomer Wolfson, Ori Ram et al.
Making RL with Preference-based Feedback Efficient via Randomization
Runzhe Wu, Wen Sun
MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning
Zohar Rimon, Tom Jurgenson, Orr Krupnik et al.
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
Xiang Yue, Xingwei Qu, Ge Zhang et al.
Manifold Diffusion Fields
Ahmed Elhag, Yuyang Wang et al.
Manifold Preserving Guided Diffusion
Yutong He, Naoki Murata, Chieh-Hsin Lai et al.
Manipulating dropout reveals an optimal balance of efficiency and robustness in biological and machine visual systems
Jacob Prince, Gabriel Fajardo, George Alvarez et al.
MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding
Lirong Wu, Yijun Tian, Yufei Huang et al.
MAP IT to Visualize Representations
Robert Jenssen
Mask-Based Modeling for Neural Radiance Fields
Ganlin Yang, Guoqiang Wei, Zhizheng Zhang et al.
Masked Audio Generation using a Single Non-Autoregressive Transformer
Alon Ziv, Itai Gat, Gael Le Lan et al.
Masked Autoencoders with Multi-Window Local-Global Attention Are Better Audio Learners
Sarthak Yadav, Sergios Theodoridis, Lars Kai Hansen et al.
Masked Completion via Structured Diffusion with White-Box Transformers
Druv Pai, Sam Buchanan, Ziyang Wu et al.
Masked Distillation Advances Self-Supervised Transformer Architecture Search
Caixia Yan, Xiaojun Chang, Zhihui Li et al.
Masked Structural Growth for 2x Faster Language Model Pre-training
Yiqun Yao, Zheng Zhang, Jing Li et al.
Masks, Signs, And Learning Rate Rewinding
Advait Gadhikar, Rebekka Burkholz