
A Distributed Computation Offloading Scheduling Framework based on Deep Reinforcement Learning

Note: We don't have the ability to review papers

PubDate: December 2021

Teams: Beihang University; Texas A&M University

Writers: Bin Dai; Tao Ren; Jianwei Niu; Zheyuan Hu; Shucheng Hu; Meikang Qiu

PDF: A Distributed Computation Offloading Scheduling Framework based on Deep Reinforcement Learning

Abstract

Recent years have witnessed the rapid growth of smart devices and mobile applications. However, mobile applications are typically computation-intensive and delay-sensitive, while User Devices (UDs) are usually resource-limited. Mobile Edge Computing (MEC) has been proposed as a promising paradigm to mitigate this tension: a UD's tasks can be executed either locally on the device itself or remotely on an edge server via computation offloading. Many efficient computation offloading scheduling approaches have been proposed, but most of them rely on centralized scheduling, which can run into trouble in large-scale MEC. To address this issue, this paper proposes a distributed scheduling framework that leverages the idea of 'centralized training and distributed scheduling'. Furthermore, Actor-Critic reinforcement learning is adopted to build the framework, where the Actor and Critic play the roles of distributed scheduling and centralized training, respectively. Extensive simulations are conducted, and the experimental results verify the effectiveness and efficiency of the proposed framework.
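The 'centralized training and distributed scheduling' idea maps naturally onto the Actor-Critic split described in the abstract: each UD runs its own lightweight Actor that makes offloading decisions from local observations, while a centralized Critic that sees the global state is used only during training. The sketch below illustrates this structure in PyTorch; it is not the authors' implementation, and the device count, state dimensions, toy reward, and network sizes are all illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of 'centralized training,
# distributed scheduling' with Actor-Critic. The device count, local
# state features, and the toy reward below are illustrative assumptions.
import torch
import torch.nn as nn

N_UDS = 4          # hypothetical number of user devices
LOCAL_STATE = 3    # e.g. task size, CPU load, channel gain (assumed)
N_ACTIONS = 2      # 0 = execute locally, 1 = offload to the edge server

class Actor(nn.Module):
    """Per-device policy: maps one UD's local state to an offloading decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LOCAL_STATE, 32), nn.ReLU(),
                                 nn.Linear(32, N_ACTIONS))
    def forward(self, local_state):
        return torch.distributions.Categorical(logits=self.net(local_state))

class Critic(nn.Module):
    """Centralized value function over the concatenated global state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_UDS * LOCAL_STATE, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, global_state):
        return self.net(global_state).squeeze(-1)

actors = [Actor() for _ in range(N_UDS)]
critic = Critic()
params = [p for a in actors for p in a.parameters()] + list(critic.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    # Distributed scheduling: each Actor decides from its own local observation.
    local_states = torch.randn(N_UDS, LOCAL_STATE)       # stand-in observations
    dists = [actors[i](local_states[i]) for i in range(N_UDS)]
    actions = torch.stack([d.sample() for d in dists])
    log_probs = torch.stack([d.log_prob(a) for d, a in zip(dists, actions)])

    # Toy reward for illustration only; a real MEC reward would trade off
    # task latency and energy under channel and edge-server contention.
    reward = actions.float().mean() - 0.5

    # Centralized training: the Critic sees the global state and supplies
    # the advantage signal shared by all Actors.
    value = critic(local_states.flatten())
    advantage = reward - value
    actor_loss = -(log_probs.sum() * advantage.detach())
    critic_loss = advantage.pow(2)

    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
```

At deployment time only the per-device Actors are needed, so scheduling stays fully distributed; the Critic exists purely to stabilize training with a global value estimate.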
