Papers by Natalie Mackraz
3 papers found
Aligning LLMs by Predicting Preferences from User Writing Samples
Stéphane Aroca-Ouellette, Natalie Mackraz, Barry-John Theobald et al.
ICML 2025 (poster) — arXiv:2505.23815
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
Yinong O Wang, Nivedha Sivakumar, Falaah Arif Khan et al.
ICML 2025 (poster) — arXiv:2505.23996
Large Language Models as Generalizable Policies for Embodied Tasks
Andrew Szot, Max Schwarzer, Harsh Agrawal et al.
ICLR 2024 (poster) — arXiv:2310.17722