ECCV poster papers matching "adversarial attacks"
9 papers found
Adversarial Prompt Tuning for Vision-Language Models
Jiaming Zhang, Xingjun Ma, Xin Wang et al.
ECCV 2024 poster · arXiv:2311.11261
34 citations
A Secure Image Watermarking Framework with Statistical Guarantees via Adversarial Attacks on Secret Key Networks
Feiyu CHEN, Wei Lin, Ziquan Liu et al.
ECCV 2024 poster
1 citation
Concept Arithmetics for Circumventing Concept Inhibition in Diffusion Models
Vitali Petsiuk, Kate Saenko
ECCV 2024 poster · arXiv:2404.13706
8 citations
Exploring Vulnerabilities in Spiking Neural Networks: Direct Adversarial Attacks on Raw Event Data
Yanmeng Yao, Xiaohan Zhao, Bin Gu
ECCV 2024 poster
9 citations
MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models
Xin Liu, Yichen Zhu, Jindong Gu et al.
ECCV 2024 poster · arXiv:2311.17600
183 citations
MultiDelete for Multimodal Machine Unlearning
Jiali Cheng, Hadi Amiri
ECCV 2024 poster · arXiv:2311.12047
13 citations
Robustness Tokens: Towards Adversarial Robustness of Transformers
Brian Pulfer, Yury Belousov, Slava Voloshynovskiy
ECCV 2024 poster · arXiv:2503.10191
Shedding More Light on Robust Classifiers under the lens of Energy-based Models
Mujtaba Hussain Mirza, Maria Rosaria Briglia, Senad Beadini et al.
ECCV 2024 poster · arXiv:2407.06315
7 citations
SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization
Xixu Hu, Runkai Zheng, Jindong Wang et al.
ECCV 2024 poster · arXiv:2402.03317
5 citations