ICML 2024 "transfer learning" Papers
21 papers found
${\rm E}(3)$-Equivariant Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning
Dingyang Chen, Qi Zhang
Bridging Data Gaps in Diffusion Models with Adversarial Noise-Based Transfer Learning
Xiyu Wang, Baijiong Lin, Daochang Liu et al.
CARTE: Pretraining and Transfer for Tabular Learning
Myung Jun Kim, Léo Grinsztajn, Gaël Varoquaux
DNA-SE: Towards Deep Neural-Nets Assisted Semiparametric Estimation
Qinshuo Liu, Zixin Wang, Xi'an Li et al.
Encodings for Prediction-based Neural Architecture Search
Yash Akhauri, Mohamed Abdelfattah
Feature Reuse and Scaling: Understanding Transfer Learning with Protein Language Models
Francesca-Zhoufan Li, Ava Amini, Yisong Yue et al.
Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem
Maciej Wołczyk, Bartłomiej Cupiał, Mateusz Ostaszewski et al.
Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics
Ankit Vani, Frederick Tung, Gabriel Oliveira et al.
Graph Positional and Structural Encoder
Semih Cantürk, Renming Liu, Olivier Lapointe-Gagné et al.
Guarantees for Nonlinear Representation Learning: Non-identical Covariates, Dependent Data, Fewer Samples
Thomas T. Zhang, Bruce Lee, Ingvar Ziemann et al.
Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective
Lei Zhao, Mengdi Wang, Yu Bai
Learning from Memory: Non-Parametric Memory Augmented Self-Supervised Learning of Visual Features
Thalles Silva, Helio Pedrini, Adín Ramírez Rivera
Matrix Information Theory for Self-Supervised Learning
Yifan Zhang, Zhiquan Tan, Jingqin Yang et al.
MF-CLR: Multi-Frequency Contrastive Learning Representation for Time Series
Jufang Duan, Wei Zheng, Yangzhou Du et al.
Minimum-Norm Interpolation Under Covariate Shift
Neil Mallinar, Austin Zane, Spencer Frei et al.
On Hypothesis Transfer Learning of Functional Linear Models
Haotian Lin, Matthew Reimherr
Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
Florian Tramèr, Gautam Kamath, Nicholas Carlini
Position: Will we run out of data? Limits of LLM scaling based on human-generated data
Pablo Villalobos, Anson Ho, Jaime Sevilla et al.
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers
Md Shamim Hussain, Mohammed Zaki, Dharmashankar Subramanian
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts
Shengzhuang Chen, Jihoon Tack, Yunqiao Yang et al.
When is Transfer Learning Possible?
My Phan, Kianté Brantley, Stephanie Milani et al.