"adversarial robustness" Papers
62 papers found • Page 1 of 2
$\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples
Antonio Emanuele Cinà, Francesco Villani, Maura Pintor et al.
Alias-Free ViT: Fractional Shift Invariance via Linear Attention
Hagay Michaeli, Daniel Soudry
Attack by Yourself: Effective and Unnoticeable Multi-Category Graph Backdoor Attacks with Subgraph Triggers Pool
Jiangtong Li, Dongyi Liu, Kun Zhu et al.
Bridging Symmetry and Robustness: On the Role of Equivariance in Enhancing Adversarial Robustness
Longwei Wang, Ifrat Ikhtear Uddin, KC Santosh et al.
DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders
Sizai Hou, Songze Li, Duanyi Yao
Dynamical Low-Rank Compression of Neural Networks with Robustness under Adversarial Attacks
Steffen Schotthöfer, Lexie Yang, Stefan Schnake
Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks
Binghui Li, Zhixuan Pan, Kaifeng Lyu et al.
Improving Generalization and Robustness in SNNs Through Signed Rate Encoding and Sparse Encoding Attacks
Bhaskar Mukhoty, Hilal AlQuabeh, Bin Gu
MUNBa: Machine Unlearning via Nash Bargaining
Jing Wu, Mehrtash Harandi
PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies
Mojtaba Nafez, Amirhossein Koochakian, Arad Maleki et al.
Resolution Attack: Exploiting Image Compression to Deceive Deep Neural Networks
Wangjia Yu, Xiaomeng Fu, Qiao Li et al.
Robust Conformal Prediction with a Single Binary Certificate
Soroush H. Zargarbashi, Aleksandar Bojchevski
Support is All You Need for Certified VAE Training
Changming Xu, Debangshu Banerjee, Deepak Vasisht et al.
Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing
Alaa Anani, Tobias Lorenz, Bernt Schiele et al.
Adversarial Attacks on Combinatorial Multi-Armed Bandits
Rishab Balasubramanian, Jiawei Li, Prasad Tadepalli et al.
Adversarially Robust Deep Multi-View Clustering: A Novel Attack and Defense Framework
Haonan Huang, Guoxu Zhou, Yanghang Zheng et al.
Adversarially Robust Distillation by Reducing the Student-Teacher Variance Gap
Junhao Dong, Piotr Koniusz, Junxi Chen et al.
Adversarially Robust Hypothesis Transfer Learning
Yunjuan Wang, Raman Arora
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
Brian Bartoldson, James Diffenderfer, Konstantinos Parasyris et al.
Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data
Yujun Zhou, Yufei Han, Haomin Zhuang et al.
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks
Zhiyuan Cheng, Zhaoyi Liu, Tengda Guo et al.
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks
Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman
Be Your Own Neighborhood: Detecting Adversarial Examples by the Neighborhood Relations Built on Self-Supervised Learning
Zhiyuan He, Yijun Yang, Pin-Yu Chen et al.
Breaking the Barrier: Enhanced Utility and Robustness in Smoothed DRL Agents
Chung-En Sun, Sicun Gao, Lily Weng
Can Implicit Bias Imply Adversarial Robustness?
Hancheng Min, Rene Vidal
Catastrophic Overfitting: A Potential Blessing in Disguise
MN Zhao, Lihe Zhang, Yuqiu Kong et al.
Causality Based Front-door Defense Against Backdoor Attack on Language Models
Yiran Liu, Xiaoang Xu, Zhiyi Hou et al.
Characterizing Model Robustness via Natural Input Gradients
Adrian Rodriguez-Munoz, Tongzhou Wang, Antonio Torralba
Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval
Qiwei Tian, Chenhao Lin, Zhengyu Zhao et al.
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks
Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin et al.
Compositional Curvature Bounds for Deep Neural Networks
Taha Entesari, Sina Sharifi, Mahyar Fazlyab
Consistent Adversarially Robust Linear Classification: Non-Parametric Setting
Elvis Dohmatob
Coupling Graph Neural Networks with Fractional Order Continuous Dynamics: A Robustness Study
Qiyu Kang, Kai Zhao, Yang Song et al.
DataFreeShield: Defending Adversarial Attacks without Training Data
Hyeyoon Lee, Kanghyun Choi, Dain Kwon et al.
Enhancing Adversarial Robustness in SNNs with Sparse Gradients
Yujia Liu, Tong Bu, Jianhao Ding et al.
Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples
Andrew C. Cullen, Shijie Liu, Paul Montague et al.
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Jon Vadillo, Roberto Santana, Jose A Lozano
Geometry-Aware Instrumental Variable Regression
Heiner Kremer, Bernhard Schölkopf
Graph Adversarial Diffusion Convolution
Songtao Liu, Jinghui Chen, Tianfan Fu et al.
Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu, Yixin Liu, Ninghao Liu et al.
LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate
Tao Wu, Tie Luo, D. C. Wunsch
Lyapunov-Stable Deep Equilibrium Models
Haoyu Chu, Shikui Wei, Ting Liu et al.
On the Duality Between Sharpness-Aware Minimization and Adversarial Training
Yihao Zhang, Hangzhou He, Jingyu Zhu et al.
OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift
Lin Li, Yifei Wang, Chawin Sitawarin et al.
Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off
Yuansan Liu, Ruqing Zhang, Mingkun Zhang et al.
Precise Accuracy / Robustness Tradeoffs in Regression: Case of General Norms
Elvis Dohmatob, Meyer Scetbon
Rethinking Adversarial Robustness in the Context of the Right to be Forgotten
Chenxu Zhao, Wei Qian, Yangyi Li et al.
Rethinking Fast Adversarial Training: A Splitting Technique To Overcome Catastrophic Overfitting
Masoumeh Zareapoor, Pourya Shamsolmoali
Robust Classification via a Single Diffusion Model
Huanran Chen, Yinpeng Dong, Zhengyi Wang et al.
Robustness Tokens: Towards Adversarial Robustness of Transformers
Brian Pulfer, Yury Belousov, Slava Voloshynovskiy