Optimizing Adaptive Attacks against Watermarks for Language Models

ICML 2025

Abstract

Large Language Models (LLMs) can be misused to spread unwanted content at scale. Content watermarking deters misuse by hiding messages in content, enabling its detection using a secret watermarking key. Robustness is a core security property, stating that evading detection requires (significant) degradation of the content's quality. Many LLM watermarking methods have been proposed, but robustness is tested only against non-adaptive attackers who lack knowledge of the watermarking method and can find only suboptimal attacks. We formulate watermark robustness as an objective function and use preference-based optimization to tune adaptive attacks against the specific watermarking method. Our evaluation shows that (i) adaptive attacks evade detection against all surveyed watermarks, (ii) training against any watermark succeeds in evading unseen watermarks, and (iii) optimization-based attacks are cost-effective. Our findings underscore the need to test robustness against adaptively tuned attacks. We release our adaptively tuned paraphrasers at https://github.com/nilslukas/ada-wm-evasion.
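The abstract describes tuning a paraphraser with preference-based optimization against a specific watermark detector. The sketch below is not the authors' implementation; it illustrates, under stated assumptions, how preference pairs could be formed (prefer candidate paraphrases the detector scores low while quality stays acceptable) and the DPO-style loss commonly used in preference-based tuning. The functions detect_score and quality_score, the quality floor, and the toy log-probabilities are hypothetical placeholders.

    # Minimal sketch, assuming a watermark detector and a quality judge are available.
    import math
    import random

    def detect_score(text: str) -> float:
        """Hypothetical watermark detector: higher = more likely watermarked."""
        return random.random()

    def quality_score(text: str) -> float:
        """Hypothetical quality judge: higher = better paraphrase quality."""
        return random.random()

    def build_preference_pair(paraphrases: list[str], quality_floor: float = 0.5):
        """Rank candidate paraphrases of one watermarked text: the 'chosen' sample
        is the candidate the detector scores lowest among those above a quality
        floor; the 'rejected' sample is the candidate the detector scores highest."""
        viable = [p for p in paraphrases if quality_score(p) >= quality_floor] or paraphrases
        chosen = min(viable, key=detect_score)
        rejected = max(paraphrases, key=detect_score)
        return chosen, rejected

    def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
        """DPO loss for one preference pair, given sequence log-probabilities under
        the paraphraser being tuned and a frozen reference copy."""
        margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
        return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

    if __name__ == "__main__":
        candidates = ["paraphrase A", "paraphrase B", "paraphrase C"]
        chosen, rejected = build_preference_pair(candidates)
        # Toy log-probabilities; in practice these come from scoring the chosen and
        # rejected paraphrases with the tuned model and its frozen reference.
        print(chosen, rejected, dpo_loss(-12.3, -11.8, -12.5, -12.0))

In practice the pair construction encodes the robustness objective from the paper's framing (evade detection without degrading quality), while the loss itself is standard preference optimization; the detector and quality judge shown here are stand-ins for whatever the attacker can query.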
