SAN: Hypothesizing Long-Term Synaptic Development and Neural Engram Mechanism in Scalable Model's Parameter-Efficient Fine-Tuning

ICML 2025 · #766 of 3340 papers · 10 authors · 0 citations

Abstract

Advances in Parameter-Efficient Fine-Tuning (PEFT) have bridged the performance gap with Full Fine-Tuning (FFT) through sophisticated analysis of pre-trained parameter spaces. Drawing insights from Neural Engrams (NE) in Biological Neural Networks (BNNs), we establish a connection between the low-rank property observed during PEFT's parameter-space shifting and neurobiological mechanisms. This observation leads to our proposed method, Synapse and Neuron (SAN), which decomposes the scaling component of anterior feature-adjustment vectors and propagates it towards posterior weight matrices. Our approach is theoretically grounded in Long-Term Potentiation/Depression (LTP/D), the phenomena that govern synapse development through the modulation of neurotransmitter release. Extensive experiments demonstrate its effectiveness: on vision tasks across VTAB, FGVC, and GIC (25 datasets) using ViT, Swin-T, and ConvNeXt architectures, SAN outperforms FFT by up to 8.7% and LoRA by 3.2%; on language tasks using Commonsense Reasoning (8 datasets) with LLaMA models (all generations), it surpasses ChatGPT by up to 8.5% and LoRA by 4.7%; on vision-language tasks using Visual Instruction Tuning (7 datasets) with LLaVA models, it exceeds FFT by up to 2.4% and LoRA by 1.9%. Our code and W&B logs will be released.
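
The abstract only sketches the mechanism, so the following PyTorch snippet is a minimal illustration of the general idea under stated assumptions: each frozen linear layer carries a learnable per-channel scaling vector (the "anterior feature adjustment vector"), and that scaling component is then propagated, i.e. folded column-wise, into the posterior weight matrix. The names ScaledLinear and propagate_scale are illustrative, not the authors' implementation, and the snippet omits the shift component and SAN's LTP/D-motivated analysis.

import torch
import torch.nn as nn


class ScaledLinear(nn.Module):
    """Frozen pre-trained linear layer with a learnable per-channel scale
    (an assumed stand-in for the abstract's anterior feature adjustment vector)."""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)          # pre-trained weights stay frozen
        self.scale = nn.Parameter(torch.ones(linear.out_features))  # only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * self.linear(x)   # per-channel feature adjustment


@torch.no_grad()
def propagate_scale(anterior: ScaledLinear, posterior: ScaledLinear) -> None:
    """Fold the anterior layer's scale into the posterior weight matrix.

    Because W @ (s * h) == (W * s) @ h for a per-channel scale s, the scaling
    component can be absorbed column-wise into the next layer's weights and
    the anterior vector reset to ones, leaving the network's output unchanged.
    """
    posterior.linear.weight.mul_(anterior.scale.unsqueeze(0))  # scale columns
    anterior.scale.fill_(1.0)


# Toy check: simulate a fine-tuned scale, then verify propagation preserves outputs.
layer1, layer2 = ScaledLinear(nn.Linear(768, 768)), ScaledLinear(nn.Linear(768, 768))
with torch.no_grad():
    layer1.scale.copy_(0.5 + torch.rand(768))  # pretend this was learned
x = torch.randn(4, 768)
before = layer2(layer1(x))
propagate_scale(layer1, layer2)
after = layer2(layer1(x))
assert torch.allclose(before, after, atol=1e-5)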

Citation History: 0 citations as of Jan 28, 2026