Tail-Learning: Adaptive Learning Method for Mitigating Tail Latency in Autonomous Edge Systems

Note: We do not have the ability to review papers.

PubDate: Dec 2023

Team: Zhejiang University

Writers: Cheng Zhang, Yinuo Deng, Hailiang Zhao, Tianlv Chen, Shuiguang Deng

PDF: Tail-Learning: Adaptive Learning Method for Mitigating Tail Latency in Autonomous Edge Systems


In the realm of edge computing, the increasing demand for high Quality of Service (QoS), particularly in dynamic multimedia streaming applications (e.g., Augmented Reality/Virtual Reality and online gaming), has created a need for effective scheduling solutions. However, adopting an edge paradigm grounded in distributed computing has exacerbated the problem of tail latency. Because edge servers support only a limited variety of multimedia services and user requests arrive dynamically, traditional queuing methods are ill-suited to modeling tail latency in distributed edge computing, which substantially worsens head-of-line (HoL) blocking. In response to this challenge, we have developed a learning-based scheduling method that mitigates overall tail latency by adaptively selecting appropriate edge servers for execution as incoming distributed tasks of unknown size arrive. To optimize the utilization of the edge computing paradigm, we leverage Laplace transform techniques to theoretically derive an upper bound on the response time of edge servers. We then integrate this upper bound into reinforcement learning to facilitate tail learning and enable informed decisions for autonomous distributed scheduling. Experimental results demonstrate that the method reduces tail latency more effectively than existing approaches.
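To make the high-level idea concrete, the abstract's core loop (adaptively pick an edge server, observe response time, and learn to avoid servers with long tails) can be sketched as a tiny epsilon-greedy bandit-style scheduler. This is a minimal illustrative sketch only, not the paper's algorithm: the class name `TailAwareScheduler`, the reward definition (negative observed latency), and all parameters are assumptions for illustration, and the paper's Laplace-transform-derived upper bound is not reproduced here.

```python
import random

class TailAwareScheduler:
    """Hypothetical sketch: an epsilon-greedy scheduler that learns a
    per-server value estimate and routes tasks to the server expected
    to respond fastest. Not the paper's actual method."""

    def __init__(self, n_servers: int, epsilon: float = 0.1, lr: float = 0.5):
        # Value estimate per server; reward is negative observed latency,
        # so larger values mean lower expected latency.
        self.q = [0.0] * n_servers
        self.epsilon = epsilon  # exploration rate
        self.lr = lr            # learning rate

    def select(self) -> int:
        """Pick a server: explore with probability epsilon, else exploit."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda s: self.q[s])

    def update(self, server: int, observed_latency: float) -> None:
        """Update the chosen server's value toward the new observation."""
        reward = -observed_latency
        self.q[server] += self.lr * (reward - self.q[server])
```

In the paper's setting, the raw observed latency would be replaced or augmented by the theoretically derived response-time upper bound, giving the learner a tail-aware signal rather than a purely empirical one.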
