"preference modeling" Papers
7 papers found
Beyond Scalar Rewards: An Axiomatic Framework for Lexicographic MDPs
Mehran Shakerinava, Siamak Ravanbakhsh, Adam Oberman
NeurIPS 2025 spotlight · arXiv:2505.12049
Efficient and Near-Optimal Algorithm for Contextual Dueling Bandits with Offline Regression Oracles
Aadirupa Saha, Robert Schapire
NeurIPS 2025 poster
Logic-Logit: A Logic-Based Approach to Choice Modeling
Shuhan Zhang, Wendi Ren, Shuang Li
ICLR 2025 poster
Pairwise Calibrated Rewards for Pluralistic Alignment
Daniel Halpern, Evi Micha, Ariel Procaccia et al.
NeurIPS 2025 poster · arXiv:2506.06298
Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models
Ángela López-Cardona, Carlos Segura, Alexandros Karatzoglou et al.
ICLR 2025 poster · arXiv:2410.01532
8 citations
Nash Learning from Human Feedback
Rémi Munos, Michal Valko, Daniele Calandriello et al.
ICML 2024 spotlight
Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input
Andi Peng, Yuying Sun, Tianmin Shu et al.
ICML 2024 oral