Energy-Efficient Heuristics for Job Assignment in Server Farms


Student thesis: Doctoral Thesis



  • Jing FU

Awarding Institution
Award date: 13 Jun 2016


The rapidly increasing energy consumption of modern server farms has driven studies to improve their energy efficiency. In this thesis, we analyze stochastic job assignment in a server farm comprising servers with various speeds, power consumption rates and buffer sizes. Ultimately, we seek to optimize the energy efficiency of server farms by controlling the carried load on the networked servers.
We consider two types of assignment policies: with and without jockeying. Jockeying in a multi-queue system refers to the reassignment of incomplete jobs between queues. The job assignment policies considered here select a server with a vacancy to accept newly arriving jobs; in the jockeying case, a policy may also reassign jobs that are already in service. The energy efficiency of a server farm is defined as the ratio of its job throughput to its power consumption, representing the useful work per unit cost. Maximizing this ratio can be modeled as a (semi-)Markov decision process, and an optimal policy can be obtained by conventional dynamic programming techniques. However, computing an optimal solution for our multi-queue systems is constrained by its computational complexity, so it serves only as a baseline when comparing the performance of computationally feasible policies in a server farm with tens of thousands of servers.
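The energy-efficiency objective above can be sketched for a single system state. The server parameters below are hypothetical illustrations, not values from the thesis; under processor sharing, a busy server works at its full speed regardless of how many jobs share it, and an idle server still draws idle power.

```python
# Hypothetical server parameters, for illustration only:
# speed (jobs/unit time), power when busy/idle (watts), buffer size.
servers = [
    {"speed": 2.0, "busy_power": 200.0, "idle_power": 80.0, "buffer": 10},
    {"speed": 1.0, "busy_power": 90.0,  "idle_power": 40.0, "buffer": 5},
    {"speed": 0.5, "busy_power": 60.0,  "idle_power": 30.0, "buffer": 5},
]

def farm_energy_efficiency(jobs, servers):
    """Ratio of aggregate service rate to aggregate power in the state
    `jobs`, where jobs[i] is the number of jobs at server i.

    A processor-sharing server delivers its full speed whenever it holds
    at least one job; idle servers contribute idle power but no work."""
    throughput = sum(s["speed"] for s, n in zip(servers, jobs) if n > 0)
    power = sum(s["busy_power"] if n > 0 else s["idle_power"]
                for s, n in zip(servers, jobs))
    return throughput / power
```

For example, with only the fastest server busy, the ratio is 2.0 / (200 + 40 + 30); with all three busy it rises to 3.5 / 350, showing why the choice of which servers carry load changes the farm's efficiency.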
For the jockeying case, we propose a robust job assignment policy, called E*, to maximize the energy efficiency of a server farm as defined above. We model the server farm as a system of parallel finite-buffer processor-sharing queues with heterogeneous server speeds and energy consumption rates. We devise E* as an insensitive policy, so that the stationary distribution of the number of jobs in the system depends on the job size distribution only through its mean. We provide a rigorous analysis of E* and compare it with a baseline approach, known as most energy-efficient server first (MEESF), which greedily chooses the most energy-efficient servers for job assignment. We show that E* always achieves a higher job throughput than MEESF, and derive realistic conditions under which E* is guaranteed to outperform MEESF in terms of energy efficiency. Extensive numerical results are presented to illustrate that E* can improve energy efficiency by up to 100%.
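The MEESF baseline admits a short sketch. The ranking criterion here (speed per watt when busy) and the server fields are assumptions for illustration; the thesis defines the policy formally.

```python
def meesf_assign(jobs, servers):
    """Greedy MEESF sketch: route a new arrival to the most
    energy-efficient server (speed per watt when busy) that still has
    buffer space.  jobs[i] is the current queue length at server i.
    Returns the chosen server index, or None if every buffer is full
    (the arriving job is blocked)."""
    order = sorted(range(len(servers)),
                   key=lambda i: servers[i]["speed"] / servers[i]["busy_power"],
                   reverse=True)
    for i in order:
        if jobs[i] < servers[i]["buffer"]:
            return i
    return None
```

The greediness is visible in the single fixed ordering: MEESF keeps packing jobs onto the most efficient servers first, which is what E* is shown to improve upon.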
We also study the problem of job assignment in a large-scale, realistically dimensioned server farm comprising multiple processor-sharing servers with different service rates, energy consumption rates, and buffer sizes. Our aim is to optimize the energy efficiency of such a server farm by effectively controlling the carried load on the networked servers. To this end, we propose a job assignment policy, called Most energy-efficient available server first Accounting for Idle Power (MAIP), which is both scalable and near optimal. MAIP focuses on reducing the productive power used to support the processing service rate. Using the framework of a semi-Markov decision process, we show that, with exponentially distributed job sizes, MAIP is equivalent to the well-known Whittle's index policy. This equivalence and the methodology of Weber and Weiss enable us to prove that, in server farms where a loss of jobs happens if and only if all buffers are full, MAIP is asymptotically optimal, under certain conditions, as the number of servers tends to infinity, matching the scale of a real server farm. Through extensive numerical simulations, we demonstrate the effectiveness of MAIP and its robustness to different job-size distributions, and observe that significant improvement in energy efficiency can be achieved by utilizing knowledge of the energy consumption rate of idle servers.
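One plausible reading of "accounting for idle power" can be sketched as an index policy: since idle power is drawn whether or not a server is used, a server's attractiveness is measured by speed per unit of *incremental* power (busy minus idle). This ranking, and the server fields, are illustrative assumptions, not the thesis's exact index; the thesis derives MAIP's index via the Whittle-index framework.

```python
def maip_assign(jobs, servers):
    """Index-policy sketch in the spirit of MAIP: among servers with a
    vacancy, pick the one with the highest service rate per unit of
    incremental power (busy_power - idle_power), since idle power is
    consumed regardless of the assignment decision.  Returns the chosen
    server index, or None if all buffers are full."""
    def index(i):
        s = servers[i]
        return s["speed"] / (s["busy_power"] - s["idle_power"])
    candidates = [i for i in range(len(servers))
                  if jobs[i] < servers[i]["buffer"]]
    if not candidates:
        return None
    return max(candidates, key=index)
```

A server with high idle power but modest busy power becomes attractive under this ranking even if its raw speed-per-watt is poor, which is the intuition behind exploiting knowledge of idle-power consumption.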