Transforming Cooling Optimization for Green Data Center via Deep Reinforcement Learning

09/15/2017
by Yuanlong Li, et al.

The cooling system plays a key role in a modern data center. Developing an optimal control policy for a data center cooling system is a challenging task. Prevailing approaches often rely on approximate system models built upon knowledge of mechanical cooling and electrical and thermal management, which are difficult to design and may lead to sub-optimal or unstable performance. In this paper we propose to utilize the large amount of monitoring data in the data center to optimize the control policy. To do so, we cast the cooling control policy design as an energy cost minimization problem with temperature constraints, and tap into the emerging deep reinforcement learning (DRL) framework. Specifically, we propose an end-to-end neural control algorithm based on the actor-critic framework and the deep deterministic policy gradient (DDPG) technique. To improve the robustness of the control algorithm, we test various DRL-related optimization techniques, such as recurrent decision making, discounted return, different neural network architectures, different stochastic gradient descent algorithms, and additional constraints on the output of the policy network. We evaluate the proposed algorithms on the EnergyPlus simulation platform and on a real data trace collected from the National Supercomputing Centre (NSCC) of Singapore. Our results show that the proposed end-to-end cooling control algorithm achieves about 10% cooling cost saving compared with a canonical two-stage optimization algorithm, and about 13.6% cooling energy saving on the NSCC data trace. Furthermore, it shows high accuracy in predicting rack temperatures (with a mean absolute error of 0.1 degree) and can keep the temperature of the data center zone close to the predefined threshold with variation below 0.2 degree.
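The sketch below is not the authors' released code; it is a minimal illustration of the actor-critic / DDPG formulation the abstract describes, where the policy network maps monitored data center state to cooling actions and the reward trades off energy cost against a temperature constraint. The state and action dimensions, the temperature threshold, and the penalty weight `lam` are illustrative assumptions; the target-network updates, replay buffer, and exploration noise of full DDPG are omitted for brevity.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2   # assumed sizes: sensor/workload features, cooling setpoints

class Actor(nn.Module):
    """Policy network: maps observed data center state to cooling control actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid())  # bounded setpoints in [0, 1]

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q-network: scores (state, action) pairs by expected discounted return."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def reward(energy_cost, rack_temp, temp_threshold=26.0, lam=10.0):
    """Energy-minimization reward with a penalty for violating the
    temperature constraint (threshold and weight are assumptions)."""
    violation = torch.clamp(rack_temp - temp_threshold, min=0.0)
    return -energy_cost - lam * violation

# One DDPG-style update step on a placeholder batch of monitoring data.
actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

state = torch.randn(32, STATE_DIM)
action = torch.rand(32, ACTION_DIM)
target_q = torch.randn(32, 1)   # stands in for r + gamma * Q'(s', pi'(s')) in full DDPG

critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

actor_loss = -critic(state, actor(state)).mean()   # deterministic policy gradient
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In this end-to-end setup the policy is trained directly from logged monitoring data, rather than from a hand-built thermal model, which is the data-driven shift the paper argues for.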
