Disentangled Representations
Learning representations that factor into distinct, independently varying generative factors of the data
Related Topics: Representation Learning
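As a primer for the papers below, a common recipe for factorized latents is a beta-VAE-style objective, which up-weights the KL term against a factorized N(0, I) prior so that each latent coordinate is pushed toward an independent factor of variation. The following is a minimal, illustrative PyTorch sketch; the architecture, dimensions, and beta value are assumptions for illustration, not taken from any paper listed here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Toy beta-VAE sketch: reconstruction loss plus a beta-weighted KL to a
    factorized standard-normal prior, the pressure that encourages disentangled
    (statistically independent) latent dimensions."""
    def __init__(self, x_dim=784, z_dim=10, beta=4.0):
        super().__init__()
        self.beta = beta
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)             # parameters of q(z|x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        recon = self.dec(z)
        recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
        # KL(q(z|x) || N(0, I)) per sample; beta > 1 strengthens the factorization pressure.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return recon_loss + self.beta * kl

# Usage on dummy data:
loss = BetaVAE()(torch.rand(32, 784))
loss.backward()

Many of the entries below trade this KL-based pressure for other mechanisms (distillation, diffusion feedback, sparsity, or multi-task structure), but the goal of recovering independent generative factors is the same.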
Top Papers
EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations
Yi-Lun Liao, Brandon Wood, Abhishek Das et al.
Linearity of Relation Decoding in Transformer Language Models
Evan Hernandez, Arnab Sen Sharma, Tal Haklay et al.
Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining
Xiang Chen, Jinshan Pan, Jiangxin Dong
Frequency-Spatial Entanglement Learning for Camouflaged Object Detection
Yanguang Sun, Chunyan Xu, Jian Yang et al.
FINER: Flexible Spectral-bias Tuning in Implicit NEural Representation by Variable-periodic Activation Functions
Zhen Liu, Hao Zhu, Qi Zhang et al.
Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders
Yaohua Zha, Huizhen Ji, Jinmin Li et al.
Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks
Marc Rußwurm, Konstantin Klemmer, Esther Rolf et al.
Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion
Kiran Chhatre, Radek Danecek, Nikos Athanasiou et al.
Sonata: Self-Supervised Learning of Reliable Point Representations
Xiaoyang Wu, Daniel DeTone, Duncan Frost et al.
Scaling Language-Free Visual Representation Learning
David Fan, Shengbang Tong, Jiachen Zhu et al.
Disentangled Prompt Representation for Domain Generalization
De Cheng, Zhipeng Xu, Xinyang Jiang et al.
Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri Chatterji, Peter L. Bartlett
Rethinking Generalizable Face Anti-spoofing via Hierarchical Prototype-guided Distribution Refinement in Hyperbolic Space
Chengyang Hu, Ke-Yue Zhang, Taiping Yao et al.
Dataset Distillation with Neural Characteristic Function: A Minmax Perspective
Shaobo Wang, Yicun Yang, Zhiyuan Liu et al.
DTL: Disentangled Transfer Learning for Visual Recognition
Minghao Fu, Ke Zhu, Jianxin Wu
Exploring Unbiased Deepfake Detection via Token-Level Shuffling and Mixing
Xinghe Fu, Zhiyuan Yan, Taiping Yao et al.
Learning Equi-angular Representations for Online Continual Learning
Minhyuk Seo, Hyunseo Koh, Wonje Jeung et al.
FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning
Chenhao Li, Elijah Stanger-Jones, Steve Heim et al.
Rethinking Multi-view Representation Learning via Distilled Disentangling
Guanzhou Ke, Bo Wang, Xiao-Li Wang et al.
Deep SE(3)-Equivariant Geometric Reasoning for Precise Placement Tasks
Ben Eisner, Yi Yang, Todor Davchev et al.
On the Provable Advantage of Unsupervised Pretraining
Jiawei Ge, Shange Tang, Jianqing Fan et al.
FoldToken: Learning Protein Language via Vector Quantization and Beyond
Zhangyang Gao, Cheng Tan, Jue Wang et al.
Bounds on Representation-Induced Confounding Bias for Treatment Effect Estimation
Valentyn Melnychuk, Dennis Frauen, Stefan Feuerriegel
Leveraging Cross-Modal Neighbor Representation for Improved CLIP Classification
Chao Yi, Lu Ren, De-Chuan Zhan et al.
TIME-FS: Joint Learning of Tensorial Incomplete Multi-View Unsupervised Feature Selection and Missing-View Imputation
Yanyong Huang, Minghui Lu, Wei Huang et al.
Gaze-LLE: Gaze Target Estimation via Large-Scale Learned Encoders
Fiona Ryan, Ajay Bati, Sangmin Lee et al.
Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach
Zhiwei Li, Guodong Long, Tianyi Zhou et al.
Adaptive Length Image Tokenization via Recurrent Allocation
Shivam Duggal, Phillip Isola, Antonio Torralba et al.
Scalable Image Tokenization with Index Backpropagation Quantization
Fengyuan Shi, Zhuoyan Luo, Yixiao Ge et al.
Efficient Learning with Sine-Activated Low-Rank Matrices
Yiping Ji, Hemanth Saratchandran, Cameron Gordon et al.
Hybrid Proposal Refiner: Revisiting DETR Series from the Faster R-CNN Perspective
Jinjing Zhao, Fangyun Wei, Chang Xu
Region-Based Representations Revisited
Michal Shlapentokh-Rothman, Ansel Blume, Yao Xiao et al.
Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning
Patrik Reizinger, Siyuan Guo, Ferenc Huszár et al.
SketchINR: A First Look into Sketches as Implicit Neural Representations
Hmrishav Bandyopadhyay, Ayan Kumar Bhunia, Pinaki Nath Chowdhury et al.
Closed-Loop Unsupervised Representation Disentanglement with $\beta$-VAE Distillation and Diffusion Probabilistic Feedback
Xin Jin, Bohan Li, Baao Xie et al.
A Unifying Framework for Representation Learning
Shaden Alshammari, John Hershey, Axel Feldmann et al.
Self-Supervised Disentangled Representation Learning for Robust Target Speech Extraction
Zhaoxi Mu, Xinyu Yang, Sining Sun et al.
Unsupervised Foundation Model-Agnostic Slide-Level Representation Learning
Tim Lenz, Peter Neidlinger, Marta Ligero et al.
ConcaveQ: Non-monotonic Value Function Factorization via Concave Representations in Deep Multi-Agent Reinforcement Learning
Huiqun Li, Hanhan Zhou, Yifei Zou et al.
Interaction Asymmetry: A General Principle for Learning Composable Abstractions
Jack Brady, Julius von Kügelgen, Sébastien Lachapelle et al.
Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation
Kihong Kim, Haneol Lee, Jihye Park et al.
VQ4DiT: Efficient Post-Training Vector Quantization for Diffusion Transformers
Juncan Deng, Shuaiting Li, Zeyu Wang et al.
FedLF: Layer-Wise Fair Federated Learning
Zibin Pan, Chi Li, Fangchen Yu et al.
Colour Passing Revisited: Lifted Model Construction with Commutative Factors
Malte Luttermann, Tanya Braun, Ralf Möller et al.
Factorized Diffusion Autoencoder for Unsupervised Disentangled Representation Learning
Ancong Wu, Wei-Shi Zheng
Memory-Scalable and Simplified Functional Map Learning
Robin Magnet, Maks Ovsjanikov
Plastic Learning with Deep Fourier Features
Alex Lewandowski, Dale Schuurmans, Marlos C. Machado
TACIT: A Target-Agnostic Feature Disentanglement Framework for Cross-Domain Text Classification
Rui Song, Fausto Giunchiglia, Yingji Li et al.
Link Prediction in Multilayer Networks via Cross-Network Embedding
Guojing Ren, Xiao Ding, Xiao-Ke Xu et al.
Deep Nonlinear Sufficient Dimension Reduction
Yinfeng Chen, Yuling Jiao, Rui Qiu et al.
The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability?
Denis Sutter, Julian Minder, Thomas Hofmann et al.
Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning
Gangwei Jiang, Caigao Jiang, Zhaoyi Li et al.
Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks
Lukas Braun, Erin Grant, Andrew Saxe
PELA: Learning Parameter-Efficient Models with Low-Rank Approximation
Yangyang Guo, Guangzhi Wang, Mohan Kankanhalli
Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
Chris Kolb, Tobias Weber, Bernd Bischl et al.
Synthetic Prior for Few-Shot Drivable Head Avatar Inversion
Wojciech Zielonka, Stephan J. Garbin, Alexandros Lattas et al.
REPA Works Until It Doesn’t: Early-Stopped, Holistic Alignment Supercharges Diffusion Training
Ziqiao Wang, Wangbo Zhao, Yuhao Zhou et al.
Towards the Disappearing Truth: Fine-Grained Joint Causal Influences Learning with Hidden Variable-Driven Causal Hypergraphs
Kun Zhu, Chunhui Zhao
FreSh: Frequency Shifting for Accelerated Neural Representation Learning
Adam Kania, Marko Mihajlovic, Sergey Prokudin et al.
Fast Encoding and Decoding for Implicit Video Representation
Hao Chen, Saining Xie, Ser-Nam Lim et al.
UNR-Explainer: Counterfactual Explanations for Unsupervised Node Representation Learning Models
Hyunju Kang, Geonhee Han, Hogun Park
Capture Global Feature Statistics for One-Shot Federated Learning
Zenghao Guan, Yucan Zhou, Xiaoyan Gu
Overcoming the Curse of Dimensionality in Reinforcement Learning Through Approximate Factorization
Chenbei Lu, Laixi Shi, Zaiwei Chen et al.
Erase Then Rectify: A Training-Free Parameter Editing Approach for Cost-Effective Graph Unlearning
Zhe-Rui Yang, Jindong Han, Chang-Dong Wang et al.
Implicit Neural Representations and the Algebra of Complex Wavelets
T. Mitchell Roddenberry, Vishwanath Saragadam, Maarten V. de Hoop et al.
DRL: Decomposed Representation Learning for Tabular Anomaly Detection
Hangting Ye, He Zhao, Wei Fan et al.
LDP: Generalizing to Multilingual Visual Information Extraction by Language Decoupled Pretraining
Huawen Shen, Gengluo Li, Jinwen Zhong et al.
Efficient Multitask Dense Predictor via Binarization
Yuzhang Shang, Dan Xu, Gaowen Liu et al.
Combining Frame and GOP Embeddings for Neural Video Representation
Jens Eirik Saethre, Roberto Azevedo, Christopher Schroers
Intrinsic Dimension Correlation: uncovering nonlinear connections in multimodal representations
Lorenzo Basile, Santiago Acevedo, Luca Bortolussi et al.
Diffusion Bridge AutoEncoders for Unsupervised Representation Learning
Yeongmin Kim, Kwanghyeon Lee, Minsang Park et al.
Zeroth-Order Fine-Tuning of LLMs in Random Subspaces
Ziming Yu, Pan Zhou, Sike Wang et al.
Enhancing Zero-Shot Multi-Speaker TTS with Negated Speaker Representations
Yejin Jeon, Yunsu Kim, Gary Geunbae Lee
Generalized Dimension Reduction Using Semi-Relaxed Gromov-Wasserstein Distance
Ranthony A. Clark, Tom Needham, Thomas Weighill
DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione
COAP: Memory-Efficient Training with Correlation-Aware Gradient Projection
Jinqi Xiao, Shen Sang, Tiancheng Zhi et al.
FactorGCL: A Hypergraph-Based Factor Model with Temporal Residual Contrastive Learning for Stock Returns Prediction
Yitong Duan, Weiran Wang, Jian Li
Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization
Vladimir Boza, Vladimir Macko
Robust Feature Learning for Multi-Index Models in High Dimensions
Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu
Towards a Theoretical Understanding of Why Local Search Works for Clustering with Fair-Center Representation
Zhen Zhang, Junfeng Yang, Limei Liu et al.
An Intuitive Multi-Frequency Feature Representation for SO(3)-Equivariant Networks
Dongwon Son, Jaehyung Kim, Sanghyeon Son et al.
Unraveling the Enigma of Double Descent: An In-depth Analysis through the Lens of Learned Feature Space
Yufei Gu, Xiaoqing Zheng, Tomaso Aste
DCT-CryptoNets: Scaling Private Inference in the Frequency Domain
Arjun Roy, Kaushik Roy
Scaling Convex Neural Networks with Burer-Monteiro Factorization
Arda Sahiner, Tolga Ergen, Batu Ozturkler et al.
Optimal Spectral Transitions in High-Dimensional Multi-Index Models
Leonardo Defilippis, Yatin Dandi, Pierre Mergny et al.
Small Singular Values Matter: A Random Matrix Analysis of Transformer Models
Max Staats, Matthias Thamm, Bernd Rosenow
Towards Learnable Anchor for Deep Multi-View Clustering
Bocheng Wang, Chusheng Zeng, Mulin Chen et al.
Unleashing Diffusion Transformers for Visual Correspondence by Modulating Massive Activations
Chaofan Gan, Yuanpeng Tu, Xi Chen et al.
Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning
Qi Wang, Zhipeng Zhang, Baao Xie et al.
Robust Machine Unlearning for Quantized Neural Networks via Adaptive Gradient Reweighting with Similar Labels
Yujia Tong, Yuze Wang, Jingling Yuan et al.
Representations Shape Weak-to-Strong Generalization: Theoretical Insights and Empirical Predictions
Yihao Xue, Jiping Li, Baharan Mirzasoleiman
Disentangling Representations through Multi-task Learning
Pantelis Vafidis, Aman Bhargava, Antonio Rangel
Robust Multi-View Learning via Representation Fusion of Sample-Level Attention and Alignment of Simulated Perturbation
Jie Xu, Na Zhao, Gang Niu et al.
Synergy Between Sufficient Changes and Sparse Mixing Procedure for Disentangled Representation Learning
Zijian Li, Shunxing Fan, Yujia Zheng et al.
Geometric Hyena Networks for Large-scale Equivariant Learning
Artem Moskalev, Mangal Prakash, Junjie Xu et al.
In-depth Analysis of Low-rank Matrix Factorisation in a Federated Setting
Constantin Philippenko, Kevin Scaman, Laurent Massoulié
Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks
Devon Jarvis, Richard Klein, Benjamin Rosman et al.
Neural Spacetimes for DAG Representation Learning
Haitz Sáez de Ocáriz Borde, Anastasis Kratsios, Marc T Law et al.
Alleviating Performance Disparity in Adversarial Spatiotemporal Graph Learning Under Zero-Inflated Distribution
Songran Bai, Yuheng Ji, Yue Liu et al.
Frequency-Masked Embedding Inference: A Non-Contrastive Approach for Time Series Representation Learning
En Fu, Yanyan Hu