TY - JOUR
T1 - Speed scaling problems with memory/cache consideration
AU - Wu, Weiwei
AU - Li, Minming
AU - Wang, Kai
AU - Huang, He
AU - Chen, Enhong
PY - 2018/12
Y1 - 2018/12
N2 - Speed scaling problems consider energy-efficient job scheduling on processors that adjust their speed to reduce energy consumption, where power consumption is a convex function of speed (usually, (Formula presented.)). In this work, we study speed scaling problems that take memory/cache into account. Each job needs some time for its memory operation when it is fetched from memory, and needs less time if fetched from the cache. The objective is to minimize energy consumption while satisfying the time constraints of the jobs. Two models are investigated: the non-cache model and the with-cache model. The non-cache model is a variant of the ideal model, in which each job i needs a fixed (Formula presented.) time for its memory operation; the with-cache model further considers the cache, a memory device with much faster access time but limited space. The uniform with-cache model is a special case of the with-cache model in which all (Formula presented.) values are the same. We provide an (Formula presented.) time algorithm and an improved (Formula presented.) time algorithm to compute the optimal solution in the non-cache model. For the with-cache model, we prove that computing the optimal solution is NP-complete. For the uniform with-cache model with agreeable jobs (later-released jobs do not have earlier deadlines), we derive an (Formula presented.) time algorithm to compute the optimal schedule, while for the general case we propose a (Formula presented.)-approximation algorithm in a resource augmentation setting in which the memory operation time can be accelerated by a factor of at most g.
KW - Algorithm design
KW - DVS
KW - Energy efficiency
KW - Memory operation time
KW - Scheduling
KW - Speed scaling
UR - http://www.scopus.com/inward/record.url?scp=85048051878&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85048051878&origin=recordpage
U2 - 10.1007/s10951-018-0565-1
DO - 10.1007/s10951-018-0565-1
M3 - Publication in refereed journal
VL - 21
SP - 633
EP - 646
JO - Journal of Scheduling
JF - Journal of Scheduling
SN - 1094-6136
IS - 6
ER -