Weight matrices compression based on PDB model in deep neural networks

Citations: 0
Rank: #766 of 3340 papers in ICML 2025
Authors: 3
Data Points: 1

Abstract

Weight matrix compression has been demonstrated to effectively reduce overfitting and improve the generalization performance of deep neural networks. Compression is primarily achieved by filtering out noisy eigenvalues of the weight matrix. In this work, a novel Population Double Bulk (PDB) model is proposed to characterize the eigenvalue behavior of the weight matrix, which is more general than the existing Population Unit Bulk (PUB) model. Based on the PDB model and Random Matrix Theory (RMT), we have developed a new PDBLS algorithm for determining the boundary between noisy eigenvalues and information. A PDB Noise-Filtering algorithm is further introduced to reduce the rank of the weight matrix for compression. Experiments show that our PDB model fits the empirical distribution of the weight matrix's eigenvalues better than the PUB model, and that our compressed weight matrices have lower rank at the same level of test accuracy. In some cases, our compression method can even improve generalization performance when labels contain noise. The code is available at https://github.com/xlwu571/PDBLS.
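To make the abstract's workflow concrete (decompose the weight matrix, place an RMT-derived boundary between the noise bulk and informative eigenvalues, and keep only the part above it), here is a minimal sketch. It is not the authors' PDBLS or PDB Noise-Filtering algorithm; as a stand-in assumption it uses the classical Marchenko-Pastur bulk edge of a single-bulk model as the noise/information boundary, purely to illustrate the filter-and-reconstruct step.

```python
# Generic RMT-style noise filtering of a weight matrix.
# NOTE: this is NOT the PDBLS / PDB Noise-Filtering algorithm from the paper;
# it assumes a single Marchenko-Pastur noise bulk as the boundary.
import numpy as np


def mp_bulk_edge(n_rows: int, n_cols: int, sigma: float = 1.0) -> float:
    """Upper edge of the Marchenko-Pastur bulk for an n_rows x n_cols matrix
    with i.i.d. entries of variance sigma**2 (eigenvalues of W.T @ W / n_rows)."""
    ratio = n_cols / n_rows
    return sigma ** 2 * (1.0 + np.sqrt(ratio)) ** 2


def filter_noisy_spectrum(W: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Drop singular values whose associated eigenvalue lies inside the assumed
    noise bulk, returning a lower-rank reconstruction of W."""
    n_rows, n_cols = W.shape
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    eigvals = s ** 2 / n_rows                      # eigenvalues of W.T @ W / n
    edge = mp_bulk_edge(n_rows, n_cols, sigma)     # assumed noise/information boundary
    keep = eigvals > edge                          # retain only outliers above the bulk
    return (U[:, keep] * s[keep]) @ Vt[keep, :]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 512, 256
    W = rng.normal(size=(n, p))                                   # pure-noise bulk
    W += rng.normal(size=(n, 4)) @ rng.normal(size=(4, p)) / 2.0  # a few signal directions
    W_filtered = filter_noisy_spectrum(W)
    print("rank before:", np.linalg.matrix_rank(W))    # full rank
    print("rank after :", np.linalg.matrix_rank(W_filtered))  # roughly the signal rank
```

The PDB model replaces the single-bulk assumption above with a two-bulk description of the population spectrum, and PDBLS chooses the boundary accordingly; the reconstruction step is analogous.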

Citation History: 0 citations as of Jan 27, 2026