2025 "harmful content generation" Papers
4 papers found
Durable Quantization Conditioned Misalignment Attack on Large Language Models
Peiran Dong, Haowei Li, Song Guo
ICLR 2025 · poster
1 citation
Fantastic Targets for Concept Erasure in Diffusion Models and Where To Find Them
Anh Bui, Thuy-Trang Vu, Long Vuong et al.
ICLR 2025 · poster · arXiv:2501.18950
Information Retrieval Induced Safety Degradation in AI Agents
Cheng Yu, Benedikt Stroebl, Diyi Yang et al.
NeurIPS 2025 · poster · arXiv:2505.14215
One Head to Rule Them All: Amplifying LVLM Safety through a Single Critical Attention Head
Junhao Xia, Haotian Zhu, Shuchao Pang et al.
NeurIPS 2025 · poster