"representation learning" Papers
62 papers found • Page 1 of 2
𝕏-Sample Contrastive Loss: Improving Contrastive Learning with Sample Similarity Graphs
Vlad Sobal, Mark Ibrahim, Randall Balestriero et al.
AmorLIP: Efficient Language-Image Pretraining via Amortization
Haotian Sun, Yitong Li, Yuchen Zhuang et al.
A Statistical Theory of Contrastive Learning via Approximate Sufficient Statistics
Licong Lin, Song Mei
Deep Kernel Posterior Learning under Infinite Variance Prior Weights
Jorge Loría, Anindya Bhadra
Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax
Ivan Butakov, Alexander Semenenko, Alexander Tolmachev et al.
How Classifier Features Transfer to Downstream: An Asymptotic Analysis in a Two-Layer Model
Hee Bin Yoo, Sungyoon Lee, Cheongjae Jang et al.
How Far Are We from True Unlearnability?
Kai Ye, Liangcai Su, Chenxiong Qian
OGBench: Benchmarking Offline Goal-Conditioned RL
Seohong Park, Kevin Frans, Benjamin Eysenbach et al.
On the creation of narrow AI: hierarchy and nonlocality of neural network skills
Eric Michaud, Asher Parker-Sartori, Max Tegmark
On the Feature Learning in Diffusion Models
Andi Han, Wei Huang, Yuan Cao et al.
Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models
Young Kyun Jang, Ser-Nam Lim
Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models
Tiezheng Zhang, Yitong Li, Yu-Cheng Chou et al.
Adaptive Discovering and Merging for Incremental Novel Class Discovery
Guangyao Chen, Peixi Peng, Yangru Huang et al.
A Global Geometric Analysis of Maximal Coding Rate Reduction
Peng Wang, Huikang Liu, Druv Pai et al.
An Unsupervised Approach for Periodic Source Detection in Time Series
Berken Utku Demirel, Christian Holz
Autoencoding Conditional Neural Processes for Representation Learning
Victor Prokhorov, Ivan Titov, Siddharth N
BaCon: Boosting Imbalanced Semi-supervised Learning via Balanced Feature-Level Contrastive Learning
Qianhan Feng, Lujing Xie, Shijie Fang et al.
BeigeMaps: Behavioral Eigenmaps for Reinforcement Learning from Images
Sandesh Adhikary, Anqi Li, Byron Boots
Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning
Yanqi Ge, Qiang Nie, Ye Huang et al.
Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains
Kyungeun Lee, Ye Seul Sim, Hye-Seung Cho et al.
Bridging Mini-Batch and Asymptotic Analysis in Contrastive Learning: From InfoNCE to Kernel-Based Losses
Panagiotis Koromilas, Giorgos Bouritsas, Theodoros Giannakopoulos et al.
Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation
Jiyong Li, Dilshod Azizov, Yang Li et al.
Contrastive Learning for Clinical Outcome Prediction with Partial Data Sources
Meng Xia, Jonathan Wilson, Benjamin Goldstein et al.
Cross-Domain Policy Adaptation by Capturing Representation Mismatch
Jiafei Lyu, Chenjia Bai, Jing-Wen Yang et al.
Data-to-Model Distillation: Data-Efficient Learning Framework
Ahmad Sajedi, Samir Khaki, Lucy Z. Liu et al.
Deep Regression Representation Learning with Topology
Shihao Zhang, Kenji Kawaguchi, Angela Yao
Differentially Private Representation Learning via Image Captioning
Tom Sander, Yaodong Yu, Maziar Sanjabi et al.
Diffusion Language Models Are Versatile Protein Learners
Xinyou Wang, Zaixiang Zheng, Fei Ye et al.
Distribution Alignment Optimization through Neural Collapse for Long-tailed Classification
Jintong Gao, He Zhao, Dandan Guo et al.
DySeT: a Dynamic Masked Self-distillation Approach for Robust Trajectory Prediction
Mozhgan Pourkeshavarz, Arielle Zhang, Amir Rasouli
Enhancing Trajectory Prediction through Self-Supervised Waypoint Distortion Prediction
Pranav Singh Chib, Pravendra Singh
Exploring Diverse Representations for Open Set Recognition
Yu Wang, Junxian Mu, Pengfei Zhu et al.
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
Jiabin Lin, Shana Moothedath, Namrata Vaswani
Feasibility Consistent Representation Learning for Safe Reinforcement Learning
Zhepeng Cen, Yihang Yao, Zuxin Liu et al.
Feature Contamination: Neural Networks Learn Uncorrelated Features and Fail to Generalize
Tianren Zhang, Chujie Zhao, Guanyu Chen et al.
FedSC: Provable Federated Self-supervised Learning with Spectral Contrastive Objective over Non-i.i.d. Data
Shusen Jing, Anlan Yu, Shuai Zhang et al.
Graph2Tac: Online Representation Learning of Formal Math Concepts
Lasse Blaauwbroek, Mirek Olšák, Jason Rute et al.
How Learning by Reconstruction Produces Uninformative Features For Perception
Randall Balestriero, Yann LeCun
InterLUDE: Interactions between Labeled and Unlabeled Data to Enhance Semi-Supervised Learning
Zhe Huang, Xiaowei Yu, Dajiang Zhu et al.
Isometric Representation Learning for Disentangled Latent Space of Diffusion Models
Jaehoon Hahm, Junho Lee, Sunghyun Kim et al.
Learning Shadow Variable Representation for Treatment Effect Estimation under Collider Bias
Baohong Li, Haoxuan Li, Ruoxuan Xiong et al.
LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views
Yuji Roh, Qingyun Liu, Huan Gui et al.
Matrix Information Theory for Self-Supervised Learning
Yifan Zhang, Zhiquan Tan, Jingqin Yang et al.
MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence
Hongduan Tian, Feng Liu, Tongliang Liu et al.
Neural Causal Abstractions
Kevin Xia, Elias Bareinboim
Neural Collapse meets Differential Privacy: Curious behaviors of NoisyGD with Near-Perfect Representation Learning
Chendi Wang, Yuqing Zhu, Weijie Su et al.
Non-parametric Representation Learning with Kernels
Hebaixu Wang, Meiqi Gong, Xiaoguang Mei et al.
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors
Chun-Yin Huang, Kartik Srinivas, Xin Zhang et al.
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi et al.
Provable Representation with Efficient Planning for Partially Observable Reinforcement Learning
Hongming Zhang, Tongzheng Ren, Chenjun Xiao et al.