Graph Assisted Offline-Online Deep Reinforcement Learning for Dynamic Workflow Scheduling

Abstract

Dynamic workflow scheduling (DWS) in cloud computing presents substantial challenges due to heterogeneous machine configurations, unpredictable workflow arrivals/patterns, and constantly evolving environments. However, existing research often assumes homogeneous setups and static conditions, limiting flexibility and adaptability in real-world scenarios. In this paper, we propose a novel Graph assisted Offline-Online Deep Reinforcement Learning (GOODRL) approach to building an effective and efficient scheduling agent for DWS. Our approach features three key innovations: (1) a task-specific graph representation and a Graph Attention Actor Network that enable the agent to dynamically assign focused tasks to heterogeneous machines while explicitly considering the future impact of each machine on these tasks; (2) a system-oriented graph representation and a Graph Attention Critic Network that facilitate efficient processing of new information and understanding its impact on the current state, crucial for managing unpredictable workflow arrivals/patterns in real-time; and (3) an offline-online method that utilizes imitation learning for effective offline training and applies gradient control and decoupled high-frequency critic training techniques during online learning to sustain the agent's robust performance in rapidly changing environments. Experimental results demonstrate that GOODRL significantly outperforms several state-of-the-art algorithms, achieving substantially lower mean flowtime and high adaptability in various online and offline scenarios.
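To make the actor-network idea concrete, the sketch below shows how graph-attention scoring could map a focused task to a probability distribution over heterogeneous machines. This is a minimal illustration only: the feature dimensions, the single-head attention form (a shared linear projection, concatenation, LeakyReLU, softmax, in the style of standard graph attention), and all variable names are assumptions, not the paper's actual GOODRL architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over machine scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_assignment_probs(task_feat, machine_feats, W, a, slope=0.2):
    """Score each machine for one focused task via single-head attention.

    task_feat:     (d,)   feature vector of the task to be scheduled
    machine_feats: (m, d) feature vectors of m heterogeneous machines
    W:             (k, d) shared linear projection (hypothetical parameter)
    a:             (2k,)  attention vector over [task || machine] embeddings
    Returns a length-m probability distribution over machines.
    """
    h_task = W @ task_feat                 # (k,)  projected task embedding
    h_mach = machine_feats @ W.T           # (m,k) projected machine embeddings
    scores = np.array([a @ np.concatenate([h_task, h]) for h in h_mach])
    scores = np.maximum(scores, slope * scores)  # LeakyReLU
    return softmax(scores)                 # assignment probabilities

# Tiny usage example with random parameters (illustrative only).
rng = np.random.default_rng(0)
d, k, m = 4, 3, 5
probs = attention_assignment_probs(
    rng.normal(size=d), rng.normal(size=(m, d)),
    rng.normal(size=(k, d)), rng.normal(size=2 * k),
)
```

In an actor-critic setup, the actor would sample a machine from `probs` for the focused task, while a separate critic network (operating on a system-oriented graph, per the abstract) estimates the value used to update both networks.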
