Poster "backdoor attacks" Papers

17 papers found

Attack by Yourself: Effective and Unnoticeable Multi-Category Graph Backdoor Attacks with Subgraph Triggers Pool

Jiangtong Li, Dongyi Liu, Kun Zhu et al.

NeurIPS 2025 poster · arXiv:2412.17213 · 2 citations

BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization

Xueyang Zhou, Guiyao Tie, Guowen Zhang et al.

NeurIPS 2025 poster · arXiv:2505.16640 · 11 citations

Certifying Language Model Robustness with Fuzzed Randomized Smoothing: An Efficient Defense Against Backdoor Attacks

Bowei He, Lihao Yin, Huiling Zhen et al.

ICLR 2025 poster · arXiv:2502.06892 · 3 citations

DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders

Sizai Hou, Songze Li, Duanyi Yao

CVPR 2025 poster · arXiv:2411.16154

FedRACE: A Hierarchical and Statistical Framework for Robust Federated Learning

Gang Yan, Sikai Yang, Wan Du

NeurIPS 2025 poster

Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning

Ye Li, Yanchao Zhao, Chengcheng Zhu et al.

CVPR 2025 poster · arXiv:2409.19601 · 2 citations

SNEAKDOOR: Stealthy Backdoor Attacks against Distribution Matching-based Dataset Condensation

He Yang, Dongyi Lv, Song Ma et al.

NeurIPS 2025 poster

Where the Devil Hides: Deepfake Detectors Can No Longer Be Trusted

Shuaiwei Yuan, Junyu Dong, Yuezun Li

CVPR 2025 poster · arXiv:2505.08255 · 2 citations

Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks

Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman

ICML 2024 poster

Causality Based Front-door Defense Against Backdoor Attack on Language Models

Yiran Liu, Xiaoang Xu, Zhiyi Hou et al.

ICML 2024 poster

Defense against Backdoor Attack on Pre-trained Language Models via Head Pruning and Attention Normalization

Xingyi Zhao, Depeng Xu, Shuhan Yuan

ICML 2024 poster

Flatness-aware Sequential Learning Generates Resilient Backdoors

Hoang Pham, The-Anh Ta, Anh Tran et al.

ECCV 2024 poster · arXiv:2407.14738 · 1 citation

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

Linshan Hou, Ruili Feng, Zhongyun Hua et al.

ICML 2024 poster

SHINE: Shielding Backdoors in Deep Reinforcement Learning

Zhuowen Yuan, Wenbo Guo, Jinyuan Jia et al.

ICML 2024 poster

TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors

Yichuan Mo, Hui Huang, Mingjie Li et al.

ICML 2024 poster

TrojVLM: Backdoor Attack Against Vision Language Models

Weimin Lyu, Lu Pang, Tengfei Ma et al.

ECCV 2024 poster · arXiv:2409.19232 · 23 citations

WBP: Training-time Backdoor Attacks through Hardware-based Weight Bit Poisoning

Kunbei Cai, Zhenkai Zhang, Qian Lou et al.

ECCV 2024 poster