ICLR 2024 Papers
2,297 papers found • Page 43 of 46
Towards Foundation Models for Knowledge Graph Reasoning
Mikhail Galkin, Xinyu Yuan, Hesham Mostafa et al.
Towards Generative Abstract Reasoning: Completing Raven’s Progressive Matrix via Rule Abstraction and Selection
Fan Shi, Bin Li, Xiangyang Xue
Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation
Kai Huang, Hanyun Yin, Heng Huang et al.
Towards Identifiable Unsupervised Domain Translation: A Diversified Distribution Matching Approach
Sagar Shrestha, Xiao Fu
Towards image compression with perfect realism at ultra-low bitrates
Marlene Careil, Matthew J Muckley, Jakob Verbeek et al.
Towards Imitation Learning to Branch for MIP: A Hybrid Reinforcement Learning based Sample Augmentation Approach
Changwen Zhang, Wenli Ouyang, Hao Yuan et al.
Towards LLM4QPE: Unsupervised Pretraining of Quantum Property Estimation and A Benchmark
Yehui Tang, Hao Xiong, Nianzu Yang et al.
Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
Ziyao Guo, Kai Wang, George Cazenavette et al.
Towards Meta-Pruning via Optimal Transport
Alexander Theus, Olin Geimer, Friedrich Wicke et al.
Towards Non-Asymptotic Convergence for Diffusion-Based Generative Models
Gen Li, Yuting Wei, Yuxin Chen et al.
Towards Offline Opponent Modeling with In-context Learning
Yuheng Jing, Kai Li, Bingyun Liu et al.
Towards Optimal Feature-Shaping Methods for Out-of-Distribution Detection
Qinyu Zhao, Ming Xu, Kartik Gupta et al.
Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback
Haolin Liu, Chen-Yu Wei, Julian Zimmert
Towards Poisoning Fair Representations
Tianci Liu, Haoyu Wang, Feijie Wu et al.
Towards Principled Representation Learning from Videos for Reinforcement Learning
Dipendra Kumar Misra, Akanksha Saran, Tengyang Xie et al.
Towards Reliable and Efficient Backdoor Trigger Inversion via Decoupling Benign Features
Xiong Xu, Kunzhe Huang, Yiming Li et al.
Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation
Yaofo Chen, Shuaicheng Niu, Yaowei Wang et al.
Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks
Xu Zheng, Farhad Shirani, Tianchun Wang et al.
Towards Robust Multi-Modal Reasoning via Model Selection
Xiangyan Liu, Rongxue Li, Wei Ji et al.
Towards Robust Offline Reinforcement Learning under Diverse Data Corruption
Rui Yang, Han Zhong, Jiawei Xu et al.
Towards Robust Out-of-Distribution Generalization Bounds via Sharpness
Yingtian Zou, Kenji Kawaguchi, Yingnan Liu et al.
Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition
Feng Lu, Lijun Zhang, Xiangyuan Lan et al.
Towards the Fundamental Limits of Knowledge Transfer over Finite Domains
Qingyue Zhao, Banghua Zhu
Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion
Alexandru Meterez, Amir Joudaki, Francesco Orabona et al.
Towards Transparent Time Series Forecasting
Krzysztof Kacprzyk, Tennison Liu, Mihaela van der Schaar
Toward Student-oriented Teacher Network Training for Knowledge Distillation
Chengyu Dong, Liyuan Liu, Jingbo Shang
Towards Understanding Factual Knowledge of Large Language Models
Xuming Hu, Junzhe Chen, Xiaochuan Li et al.
Towards Understanding Sycophancy in Language Models
Mrinank Sharma, Meg Tong, Tomek Korbak et al.
Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond
Tianxin Wei, Bowen Jin, Ruirui Li et al.
Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy
Yingyu Lin, Yian Ma, Yu-Xiang Wang et al.
Tractable Probabilistic Graph Representation Learning with Graph-Induced Sum-Product Networks
Federico Errica, Mathias Niepert
Training Bayesian Neural Networks with Sparse Subspace Variational Inference
Junbo Li, Zichen Miao, Qiang Qiu et al.
Training Diffusion Models with Reinforcement Learning
Kevin Black, Michael Janner, Yilun Du et al.
Training-free Multi-objective Diffusion Model for 3D Molecule Generation
Xu Han, Caihua Shan, Yifei Shen et al.
Training Graph Transformers via Curriculum-Enhanced Attention Distillation
Yisong Huang, Jin Li, Xinlong Chen et al.
Training Socially Aligned Language Models on Simulated Social Interactions
Ruibo Liu, Ruixin Yang, Chenyan Jia et al.
Training Unbiased Diffusion Models From Biased Dataset
Yeongmin Kim, Byeonghu Na, Minsang Park et al.
Trajeglish: Traffic Modeling as Next-Token Prediction
Jonah Philion, Xue Bin Peng, Sanja Fidler
TRAM: Bridging Trust Regions and Sharpness Aware Minimization
Tom Sherborne, Naomi Saphra, Pradeep Dasigi et al.
Transferring Labels to Solve Annotation Mismatches Across Object Detection Datasets
Yuan-Hong Liao, David Acuna, Rafid Mahmood et al.
Transferring Learning Trajectories of Neural Networks
Daiki Chijiwa
Transformer Fusion with Optimal Transport
Moritz Imfeld, Jacopo Graldi, Marco Giordano et al.
Transformer-Modulated Diffusion Models for Probabilistic Multivariate Time Series Forecasting
Yuxin Li, Wenchao Chen, Xinyue Hu et al.
Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining
Licong Lin, Yu Bai, Song Mei
Transformers can optimally learn regression mixture models
Reese Pathak, Rajat Sen, Weihao Kong et al.
Transformer-VQ: Linear-Time Transformers via Vector Quantization
Lucas D. Lingle
Transport meets Variational Inference: Controlled Monte Carlo Diffusions
Francisco Vargas, Shreyas Padhy, Denis Blessing et al.
Traveling Waves Encode The Recent Past and Enhance Sequence Learning
T. Anderson Keller, Lyle Muller, Terrence Sejnowski et al.
Treatment Effects Estimation By Uniform Transformer
Ruoqi Yu, Shulei Wang
Tree Cross Attention
Leo Feng, Frederick Tung, Hossein Hajimirsadeghi et al.