Most Cited ICLR Poster by Maxwell Lin Papers
3 papers found
#1
AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents
Maksym Andriushchenko, Alexandra Souly, Mateusz Dziemian et al.
ICLR 2025 · arXiv:2410.09024
127 citations
#2
Tamper-Resistant Safeguards for Open-Weight LLMs
Rishub Tamirisa, Bhrugu Bharathi, Long Phan et al.
ICLR 2025 · arXiv:2408.00761
108 citations
#3
Teaching Large Language Models to Self-Debug
Xinyun Chen, Maxwell Lin, Nathanael Schärli et al.
ICLR 2024 · arXiv:2304.05128