UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening

ECCV 2024

Abstract

Deep neural networks (DNNs) have demonstrated effectiveness in various fields. However, DNNs are vulnerable to backdoor attacks, which inject a unique pattern, called a trigger, into the input to cause misclassification to an attack-chosen target label. While existing works have proposed various methods to mitigate backdoor effects in poisoned models, they tend to be less effective against recent advanced attacks. In this paper, we introduce a novel post-training defense technique, UNIT, that can effectively remove backdoors for a variety of attacks. Specifically, UNIT approximates a unique and tight activation distribution for each neuron in the model. It then proactively dispels substantially large activation values that exceed the approximated boundaries. Our experimental results demonstrate that UNIT outperforms 9 popular defense methods against 14 existing backdoor attacks, including 2 advanced attacks, using only 5% of clean training data.
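To make the tightening idea concrete, the sketch below shows one way the mechanism described in the abstract could be approximated in PyTorch: estimate a per-neuron upper bound on activations from a small clean subset, then clamp activations that exceed that bound at inference time. This is an assumption-laden illustration, not the paper's actual algorithm; the names ActivationClamp and estimate_bounds, and the quantile-based bound, are hypothetical choices made for this example.

```python
import torch
import torch.nn as nn

class ActivationClamp(nn.Module):
    """Clamp each channel's activation at a per-channel upper bound.
    Illustrative stand-in for the distribution-tightening step; not the
    paper's exact procedure."""

    def __init__(self, upper_bounds: torch.Tensor):
        super().__init__()
        # upper_bounds: shape (C,), one bound per channel/neuron
        self.register_buffer("upper_bounds", upper_bounds)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the bounds over batch and spatial dims of (N, C, H, W)
        bounds = self.upper_bounds.view(1, -1, 1, 1)
        return torch.minimum(x, bounds)


@torch.no_grad()
def estimate_bounds(model: nn.Module, layer: nn.Module, clean_loader,
                    quantile: float = 0.999) -> torch.Tensor:
    """Estimate a per-channel activation bound from a small clean subset
    (e.g. ~5% of training data) as a high quantile of observed activations."""
    activations = []

    def hook(_module, _inp, out):
        # Collapse batch and spatial dims, keep the channel dim first
        activations.append(out.detach().transpose(0, 1).flatten(1))

    handle = layer.register_forward_hook(hook)
    for images, _labels in clean_loader:
        model(images)
    handle.remove()

    per_channel = torch.cat(activations, dim=1)          # (C, num_samples)
    return torch.quantile(per_channel, quantile, dim=1)  # (C,)
```

In this sketch, an ActivationClamp built from estimate_bounds would be inserted after the chosen layer, so that unusually large activations (such as those induced by a trigger) are cut back to the range observed on clean data.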
