2024 "policy optimization" Papers

28 papers found

Accelerated Policy Gradient: On the Convergence Rates of the Nesterov Momentum for Reinforcement Learning

Yen-Ju Chen, Nai-Chieh Huang, Ching-pei Lee et al.

ICML 2024 (poster)

Adapting Static Fairness to Sequential Decision-Making: Bias Mitigation Strategies towards Equal Long-term Benefit Rate

Yuancheng Xu, Chenghao Deng, Yanchao Sun et al.

ICML 2024 (oral)

Adaptive-Gradient Policy Optimization: Enhancing Policy Learning in Non-Smooth Differentiable Simulations

Feng Gao, Liangzhi Shi, Shenao Zhang et al.

ICML 2024 (poster)

Bayesian Design Principles for Offline-to-Online Reinforcement Learning

Hao Hu, Yiqin Yang, Jianing Ye et al.

ICML 2024 (poster)

Constrained Reinforcement Learning Under Model Mismatch

Zhongchang Sun, Sihong He, Fei Miao et al.

ICML 2024 (poster)

Dealing With Unbounded Gradients in Stochastic Saddle-point Optimization

Gergely Neu, Nneka Okolo

ICML 2024 (poster)

Degeneration-free Policy Optimization: RL Fine-Tuning for Language Models without Degeneration

Youngsoo Jang, Geon-Hyeong Kim, Byoungjip Kim et al.

ICML 2024 (poster)

DGPO: Discovering Multiple Strategies with Diversity-Guided Policy Optimization

Wenze Chen, Shiyu Huang, Yuan Chiang et al.

AAAI 2024 (paper), arXiv:2207.05631, 9 citations

EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search

Pengyi Li, Yan Zheng, Hongyao Tang et al.

ICML 2024 (poster)

Exploration-Driven Policy Optimization in RLHF: Theoretical Insights on Efficient Data Utilization

Yihan Du, Anna Winnicki, Gal Dalal et al.

ICML 2024 (poster)

Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation

JoonHo Lee, Jae Oh Woo, Juree Seok et al.

ICML 2024 (poster)

Information-Directed Pessimism for Offline Reinforcement Learning

Alec Koppel, Sujay Bhatt, Jiacheng Guo et al.

ICML 2024 (poster)

Iterative Regularized Policy Optimization with Imperfect Demonstrations

Xudong Gong, Dawei Feng, Kele Xu et al.

ICML 2024 (poster)

Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback

Songyang Gao, Qiming Ge, Wei Shen et al.

ICML 2024 (poster)

Model-based Reinforcement Learning for Confounded POMDPs

Mao Hong, Zhengling Qi, Yanxun Xu

ICML 2024 (poster)

Near-Optimal Regret in Linear MDPs with Aggregate Bandit Feedback

Asaf Cassel, Haipeng Luo, Aviv Rosenberg et al.

ICML 2024 (poster)

Optimistic Model Rollouts for Pessimistic Offline Policy Optimization

Yuanzhao Zhai, Yiying Li, Zijian Gao et al.

AAAI 2024 (paper), arXiv:2401.05899

Optimizing Local Satisfaction of Long-Run Average Objectives in Markov Decision Processes

David Klaška, Antonín Kučera, Vojtěch Kůr et al.

AAAI 2024 (paper), arXiv:2312.12325, 1 citation

Position: Automatic Environment Shaping is the Next Frontier in RL

Younghyo Park, Gabriel Margolis, Pulkit Agrawal

ICML 2024 (poster)

Probabilistic Constrained Reinforcement Learning with Formal Interpretability

Yanran Wang, Qiuchen Qian, David Boyle

ICML 2024 (poster)

Provably Efficient Long-Horizon Exploration in Monte Carlo Tree Search through State Occupancy Regularization

Liam Schramm, Abdeslam Boularias

ICML 2024 (poster)

Provably Robust DPO: Aligning Language Models with Noisy Feedback

Sayak Ray Chowdhury, Anush Kini, Nagarajan Natarajan

ICML 2024 (poster)

Rate-Optimal Policy Optimization for Linear Markov Decision Processes

Uri Sherman, Alon Cohen, Tomer Koren et al.

ICML 2024 (poster)

Reflective Policy Optimization

Yaozhong Gan, Renye Yan, Zhe Wu et al.

ICML 2024 (poster)

ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages

Andrew Jesson, Christopher Lu, Gunshi Gupta et al.

ICML 2024 (poster)

Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences

Andi Nika, Debmalya Mandal, Parameswaran Kamalaruban et al.

ICML 2024 (poster)

Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient

Ju-Hyun Kim, Seungki Min

ICML 2024 (poster)

Safe Reinforcement Learning using Finite-Horizon Gradient-based Estimation

Juntao Dai, Yaodong Yang, Qian Zheng et al.

ICML 2024 (poster)