Marek Cygan
Papers: 5 | Total citations: 18

Papers
- Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient (ICML 2025, 12 citations)
- Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners (NeurIPS 2025, 6 citations)
- Decoupled Policy Actor-Critic: Bridging Pessimism and Risk Awareness in Reinforcement Learning (AAAI 2025, 0 citations)
- Scaling Laws for Fine-Grained Mixture of Experts (ICML 2024, 0 citations)
- Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning (ICML 2024, 0 citations)