Poster papers matching "model stealing attacks"
2 papers found
CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment
Qinfeng Li, Tianyue Luo, Xuhong Zhang et al.
NeurIPS 2025 (poster) · arXiv:2410.13903 · 7 citations
Rethinking Adversarial Robustness in the Context of the Right to be Forgotten
Chenxu Zhao, Wei Qian, Yangyi Li et al.
ICML 2024 (poster)