Poster "reinforcement learning alignment" Papers
3 papers found
Measuring And Improving Engagement of Text-to-Image Generation Models
Varun Khurana, Yaman Singla, Jayakumar Subramanian et al.
ICLR 2025 (poster)
2 citations
PurpCode: Reasoning for Safer Code Generation
Jiawei Liu, Nirav Diwan, Zhe Wang et al.
NeurIPS 2025 (poster) · arXiv:2507.19060
7 citations
Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning
Zongmeng Zhang, Yufeng Shi, Jinhua Zhu et al.
ICML 2024 (poster)