2025 Poster "representation learning" Papers
29 papers found
$\mathbb{X}$-Sample Contrastive Loss: Improving Contrastive Learning with Sample Similarity Graphs
Vlad Sobal, Mark Ibrahim, Randall Balestriero et al.
AmorLIP: Efficient Language-Image Pretraining via Amortization
Haotian Sun, Yitong Li, Yuchen Zhuang et al.
A Statistical Theory of Contrastive Learning via Approximate Sufficient Statistics
Licong Lin, Song Mei
A Theoretical Analysis of Self-Supervised Learning for Vision Transformers
Yu Huang, Zixin Wen, Yuejie Chi et al.
A Unifying Framework for Representation Learning
Shaden Alshammari, John Hershey, Axel Feldmann et al.
Boosting Multiple Views for pretrained-based Continual Learning
Quyen Tran, Tung Lam Tran, Khanh Doan et al.
Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning
Tianle Zhang, Wanlong Fang, Jonathan Woo et al.
Deep Kernel Posterior Learning under Infinite Variance Prior Weights
Jorge Loría, Anindya Bhadra
Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax
Ivan Butakov, Alexander Semenenko, Alexander Tolmachev et al.
GeSubNet: Gene Interaction Inference for Disease Subtype Network Generation
Ziwei Yang, Zheng Chen, Xin Liu et al.
Group-robust Sample Reweighting for Subpopulation Shifts via Influence Functions
Rui Qiao, Zhaoxuan Wu, Jingtan Wang et al.
Harnessing Feature Resonance under Arbitrary Target Alignment for Out-of-Distribution Node Detection
Shenzhi Yang, Junbo Zhao, Sharon Li et al.
How Classifier Features Transfer to Downstream: An Asymptotic Analysis in a Two-Layer Model
Hee Bin Yoo, Sungyoon Lee, Cheongjae Jang et al.
How Far Are We from True Unlearnability?
Kai Ye, Liangcai Su, Chenxiong Qian
Improving Deep Regression with Tightness
Shihao Zhang, Yuguang Yan, Angela Yao
Learning a Fast Mixing Exogenous Block MDP using a Single Trajectory
Alexander Levine, Peter Stone, Amy Zhang
Minimal Semantic Sufficiency Meets Unsupervised Domain Generalization
Tan Pan, Kaiyu Guo, Dongli Xu et al.
Neural Thermodynamics: Entropic Forces in Deep and Universal Representation Learning
Liu Ziyin, Yizhou Xu, Isaac Chuang
OGBench: Benchmarking Offline Goal-Conditioned RL
Seohong Park, Kevin Frans, Benjamin Eysenbach et al.
On the creation of narrow AI: hierarchy and nonlocality of neural network skills
Eric Michaud, Asher Parker-Sartori, Max Tegmark
On the Feature Learning in Diffusion Models
Andi Han, Wei Huang, Yuan Cao et al.
Provable Meta-Learning with Low-Rank Adaptations
Jacob Block, Sundararajan Srinivasan, Liam Collins et al.
Rotary Masked Autoencoders are Versatile Learners
Uros Zivanovic, Serafina Di Gioia, Andre Scaffidi et al.
Self-Supervised Contrastive Learning is Approximately Supervised Contrastive Learning
Achleshwar Luthra, Tianbao Yang, Tomer Galanti
Studying the Interplay Between the Actor and Critic Representations in Reinforcement Learning
Samuel Garcin, Trevor McInroe, Pablo Samuel Castro et al.
Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models
Young Kyun Jang, Ser-Nam Lim
Understanding Representation Dynamics of Diffusion Models via Low-Dimensional Modeling
Xiao Li, Zekai Zhang, Xiang Li et al.
USP: Unified Self-Supervised Pretraining for Image Generation and Understanding
Xiangxiang Chu, Renda Li, Yong Wang
Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models
Tiezheng Zhang, Yitong Li, Yu-Cheng Chou et al.