2025 "representation learning" Papers

22 papers found

$\mathbb{X}$-Sample Contrastive Loss: Improving Contrastive Learning with Sample Similarity Graphs

Vlad Sobal, Mark Ibrahim, Randall Balestriero et al.

ICLR 2025 · poster · arXiv:2407.18134 · 12 citations

AmorLIP: Efficient Language-Image Pretraining via Amortization

Haotian Sun, Yitong Li, Yuchen Zhuang et al.

NeurIPS 2025 · poster · arXiv:2505.18983 · 2 citations

A Statistical Theory of Contrastive Learning via Approximate Sufficient Statistics

Licong Lin, Song Mei

NeurIPS 2025 · poster · arXiv:2503.17538 · 3 citations

A Unifying Framework for Representation Learning

Shaden Alshammari, John Hershey, Axel Feldmann et al.

ICLR 2025 · poster · arXiv:2504.16929 · 12 citations

Boosting Multiple Views for pretrained-based Continual Learning

Quyen Tran, Tung Lam Tran, Khanh Doan et al.

ICLR 2025 · poster · 4 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 · poster · arXiv:2509.17552 · 1 citation

Deep Kernel Posterior Learning under Infinite Variance Prior Weights

Jorge Loría, Anindya Bhadra

ICLR 2025 · poster · arXiv:2410.01284

Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax

Ivan Butakov, Alexander Semenenko, Alexander Tolmachev et al.

ICLR 2025 · poster · arXiv:2410.06993 · 2 citations

GeSubNet: Gene Interaction Inference for Disease Subtype Network Generation

Ziwei Yang, Zheng Chen, Xin Liu et al.

ICLR 2025 · poster · arXiv:2410.13178 · 2 citations

Harnessing Feature Resonance under Arbitrary Target Alignment for Out-of-Distribution Node Detection

Shenzhi Yang, Junbo Zhao, Sharon Li et al.

NeurIPS 2025 · poster · arXiv:2502.16076

How Classifier Features Transfer to Downstream: An Asymptotic Analysis in a Two-Layer Model

Hee Bin Yoo, Sungyoon Lee, Cheongjae Jang et al.

NeurIPS 2025 · poster

How Far Are We from True Unlearnability?

Kai Ye, Liangcai Su, Chenxiong Qian

ICLR 2025 · poster · arXiv:2509.08058 · 4 citations

OGBench: Benchmarking Offline Goal-Conditioned RL

Seohong Park, Kevin Frans, Benjamin Eysenbach et al.

ICLR 2025 · poster · arXiv:2410.20092 · 74 citations

OLinear: A Linear Model for Time Series Forecasting in Orthogonally Transformed Domain

Wenzhen Yue, Yong Liu, Hao Wang et al.

NeurIPS 2025 · oral · arXiv:2505.08550 · 9 citations

On the creation of narrow AI: hierarchy and nonlocality of neural network skills

Eric Michaud, Asher Parker-Sartori, Max Tegmark

NeurIPS 2025 · poster · arXiv:2505.15811 · 2 citations

On the Feature Learning in Diffusion Models

Andi Han, Wei Huang, Yuan Cao et al.

ICLR 2025 · poster · arXiv:2412.01021 · 13 citations

Rotary Masked Autoencoders are Versatile Learners

Uros Zivanovic, Serafina Di Gioia, Andre Scaffidi et al.

NeurIPS 2025 · poster · arXiv:2505.20535 · 1 citation

Self-Supervised Contrastive Learning is Approximately Supervised Contrastive Learning

Achleshwar Luthra, Tianbao Yang, Tomer Galanti

NeurIPS 2025 · poster · arXiv:2506.04411 · 1 citation

Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models

Young Kyun Jang, Ser-Nam Lim

ICCV 2025 · poster · arXiv:2405.14715 · 2 citations

T-REGS: Minimum Spanning Tree Regularization for Self-Supervised Learning

Julie Mordacq, David Loiseaux, Vicky Kalogeiton et al.

NeurIPS 2025 · spotlight · arXiv:2510.23484

USP: Unified Self-Supervised Pretraining for Image Generation and Understanding

Xiangxiang Chu, Renda Li, Yong Wang

ICCV 2025 · poster · arXiv:2503.06132 · 17 citations

Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models

Tiezheng Zhang, Yitong Li, Yu-Cheng Chou et al.

NeurIPS 2025 · poster · arXiv:2507.07104 · 2 citations