TY - JOUR
T1 - When Storage Response Time Catches Up With Overall Context Switch Overhead, What Is Next?
AU - Wu, Chun-Feng
AU - Chang, Yuan-Hao
AU - Yang, Ming-Chang
AU - Kuo, Tei-Wei
PY - 2020/11
Y1 - 2020/11
N2 - The virtual memory technique provides a large and cheap memory space by extending main memory with storage devices. When a page fault occurs, it applies context switches to swap pages asynchronously between memory and storage devices, hiding the long response time of the storage devices. However, the overall context switch overhead is high because the context switch itself is a complex function and further incurs TLB shootdowns/flushes and compulsory CPU cache misses after each switch. In contrast, with the rapid improvement in the responsiveness of high-end storage devices, we observe that their response time is catching up with, and gradually becoming smaller than, the overall context switch overhead. At this turning point, to further enhance system responsiveness, we advocate adopting synchronous swapping rather than context switches in response to page faults. Meanwhile, we propose a strategy, called shadow huge page management, to further improve overall system performance by minimizing the time overheads caused by page faults and page swapping. Evaluation results show that the proposed system can efficiently reduce the total wasted CPU time.
KW - Context switch
KW - CPU busy waiting
KW - data prefetching
KW - huge page
KW - killer microsecond
KW - page swapping
KW - synchronous I/O completion designs
KW - ultralow latency (ULL) devices
KW - virtual memory management
UR - http://www.scopus.com/inward/record.url?scp=85096036506&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85096036506&origin=recordpage
DO - 10.1109/TCAD.2020.3012322
M3 - RGC 21 - Publication in refereed journal
SN - 0278-0070
VL - 39
SP - 4266
EP - 4277
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 11
M1 - 9211516
ER -