Constrained Reinforcement Learning for Adaptive Controller Synchronization in Distributed SDN

Note: We don't have the ability to review this paper.

PubDate: March 2024

Teams: Yale University

Writers: Ioannis Panitsas, Akrit Mudvari, Leandros Tassiulas

PDF: Constrained Reinforcement Learning for Adaptive Controller Synchronization in Distributed SDN

Abstract

In software-defined networking (SDN), the implementation of distributed SDN controllers, with each controller responsible for managing a specific sub-network or domain, plays a critical role in achieving a balance between centralized control, scalability, reliability, and network efficiency. These controllers must be synchronized to maintain a logically centralized view of the entire network. While there are various approaches for synchronizing distributed SDN controllers, most tend to prioritize goals such as optimizing communication latency or load balancing, often neglecting to address both aspects simultaneously. This limitation becomes particularly significant when considering applications like Augmented and Virtual Reality (AR/VR), which demand constrained network latencies and substantial computational resources. Additionally, many existing studies in this field predominantly rely on value-based reinforcement learning (RL) methods, overlooking the potential advantages offered by state-of-the-art policy-based RL algorithms. To bridge this gap, our work focuses on examining deep reinforcement learning (DRL) techniques, encompassing both value-based and policy-based methods, to guarantee an upper latency threshold for AR/VR task offloading within SDN environments, while selecting the most cost-effective servers for these tasks. Our evaluation results indicate that while value-based methods excel in optimizing individual network metrics such as latency or load balancing, policy-based approaches exhibit greater robustness in adapting to sudden network changes or reconfigurations.
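The paper itself does not publish code, but the kind of constraint described in the abstract (pick the cheapest offloading server while keeping latency under an upper bound) is often handled in constrained RL with a Lagrangian relaxation: latency-threshold violations are penalized in the reward, and the penalty weight is adapted by dual ascent. The toy bandit sketch below illustrates that idea only; the server costs, latencies, the 50 ms bound, and all hyperparameters are made-up values, not the authors' formulation.

```python
import numpy as np

# Toy setup (illustrative values, not from the paper): an agent repeatedly picks
# one of N servers for AR/VR task offloading. Each server has a cost and a noisy
# latency; the goal is to minimize cost subject to latency <= LATENCY_LIMIT.
np.random.seed(0)
N_SERVERS = 5
server_cost = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # cheaper servers first
server_latency = np.array([90.0, 60.0, 40.0, 25.0, 15.0])  # mean latency in ms
LATENCY_LIMIT = 50.0   # assumed upper latency threshold (ms)

q_values = np.zeros(N_SERVERS)  # action-value estimate per server
lam = 0.0                       # Lagrange multiplier on the latency constraint
ALPHA = 0.05                    # value-update step size
LAM_LR = 0.001                  # dual-ascent step size
EPSILON = 0.1                   # exploration rate

for step in range(20_000):
    # epsilon-greedy choice of offloading server
    if np.random.rand() < EPSILON:
        a = np.random.randint(N_SERVERS)
    else:
        a = int(np.argmax(q_values))

    # simulate one offloading decision with noisy latency
    latency = server_latency[a] + np.random.normal(0.0, 5.0)

    # Lagrangian-relaxed reward: pay the server cost plus a penalty
    # proportional to how far the latency bound was exceeded
    reward = -server_cost[a] - lam * max(0.0, latency - LATENCY_LIMIT)
    q_values[a] += ALPHA * (reward - q_values[a])

    # dual ascent: raise lambda when the constraint is violated, relax it otherwise
    lam = max(0.0, lam + LAM_LR * (latency - LATENCY_LIMIT))

best = int(np.argmax(q_values))
print(f"lambda={lam:.3f}, chosen server={best}, "
      f"cost={server_cost[best]:.1f}, mean latency={server_latency[best]:.0f} ms")
```

With these made-up numbers the agent settles on the cheapest server whose latency stays under the bound, which is the qualitative behavior the abstract describes; the actual paper studies this trade-off with deep value-based and policy-based methods rather than a tabular bandit.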
