ICML 2024 "foundation models" Papers
25 papers found
Active Label Correction for Semantic Segmentation with Foundation Models
Hoyoung Kim, Sehyun Hwang, Suha Kwak et al.
Adversarially Robust Hypothesis Transfer Learning
Yunjuan Wang, Raman Arora
Asymmetry in Low-Rank Adapters of Foundation Models
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi et al.
Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling
Yair Schiff, Chia Hsiang Kao, Aaron Gokaslan et al.
Compute Better Spent: Replacing Dense Layers with Structured Matrices
Shikai Qiu, Andres Potapczynski, Marc Finzi et al.
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.
Discovering Bias in Latent Space: An Unsupervised Debiasing Approach
Dyah Adila, Shuai Zhang, Boran Han et al.
Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning
Kai Gan, Tong Wei
Let Go of Your Labels with Unsupervised Transfer
Artyom Gadetsky, Yulun Jiang, Maria Brbic
LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views
Yuji Roh, Qingyun Liu, Huan Gui et al.
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
Jiang-Xin Shi, Tong Wei, Zhi Zhou et al.
MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions
Kai Zhang, Yi Luan, Hexiang Hu et al.
MOMENT: A Family of Open Time-series Foundation Models
Mononito Goswami, Konrad Szafer, Arjun Choudhry et al.
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI
Theodore Papamarkou, Maria Skoularidou, Konstantina Palla et al.
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities
Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh
Position: On the Societal Impact of Open Foundation Models
Sayash Kapoor, Rishi Bommasani, Kevin Klyman et al.
Position: Open-Endedness is Essential for Artificial Superhuman Intelligence
Edward Hughes, Michael Dennis, Jack Parker-Holder et al.
Position: Towards Unified Alignment Between Agents, Humans, and Environment
Zonghan Yang, An Liu, Zijun Liu et al.
Prompting is a Double-Edged Sword: Improving Worst-Group Robustness of Foundation Models
Amrith Setlur, Saurabh Garg, Virginia Smith et al.
Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective
Yang Chen, Cong Fang, Zhouchen Lin et al.
RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation
Yufei Wang, Zhou Xian, Feng Chen et al.
Towards Causal Foundation Model: on Duality between Optimal Balancing and Attention
Jiaqi Zhang, Joel Jennings, Agrin Hilmkil et al.
Transferring Knowledge From Large Foundation Models to Small Downstream Models
Shikai Qiu, Boran Han, Danielle Robinson et al.
UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning
Shikun Feng, Yuyan Ni, Li et al.
ViP: A Differentially Private Foundation Model for Computer Vision
Yaodong Yu, Maziar Sanjabi, Yi Ma et al.