On Mobile Flash Storage Optimization

Student thesis: Doctoral Thesis

Award date: 24 Nov 2017

Abstract

Recent years have witnessed exponential growth in mobile devices, including smartphones, tablets, and wearable devices. For mobile devices, NAND flash memory is the primary choice for data storage due to its high performance and low power consumption.

However, compared with flash-based solid-state drives, mobile flash storage lacks sophisticated hardware and firmware features because of its resource constraints. Mobile flash storage is equipped with scarce built-in RAM, slow embedded processors, and low-cost flash memory, all of which make it challenging to optimize the I/O system for mobile devices. These factors motivate revisiting system designs dedicated to mobile flash storage.

This thesis focuses on I/O performance improvements for mobile devices. It consists of three parts: 1) a study of fragmentation at the file system; 2) a mapping-cache-aware scheduling approach at the I/O scheduler; and 3) a lightweight compression approach at the flash translation layer to reduce write pressure. The first two topics are explored at the host system, and the third is studied at the storage device.

In the first part, motivated by the observation that mobile devices can suffer from sluggish responsiveness over time, an empirical study of I/O performance degradation is conducted on Android phones. Through a series of investigations and evaluations, fragmentation in the file system is identified as contributing noticeable management overhead to the I/O performance of mobile devices. Considering the characteristics of mobile flash storage, several dedicated solutions are suggested to resolve file fragmentation on mobile devices.

In the second part, a novel I/O scheduling approach is proposed to improve the performance of the demand-based page-level mapping cache for mobile devices. This technique generates mapping-cache-friendly I/O workloads by strengthening I/O locality at the host I/O scheduler. First, a hit-prioritized I/O scheduling scheme assigns higher priorities to I/O requests whose address translation can be resolved without any cache misses. Second, a request-batched I/O scheduling scheme groups I/O requests with related logical addresses to exploit sequential hits in the mapping cache.

In the third part, a lightweight data compression technique at the flash controller is proposed to reduce write pressure on mobile flash storage. Data compressibility is first characterized on real smartphones, and the analysis shows that write traffic bound to mobile storage volumes is highly compressible. To reduce the impact of compression time, selective compression is proposed: data with high compressibility is compressed, while data with low compressibility is bypassed to save compression time. In addition, a compression-aware garbage collection policy is introduced to compress garbage-collected data in the background for long-term improvement.
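The selective compression idea from the third part can be illustrated with a minimal sketch. The threshold, sample size, use of zlib as a stand-in compressor, and function names below are assumptions made for illustration; they are not the thesis's actual estimator or parameters.

```python
import zlib

COMPRESSIBILITY_THRESHOLD = 0.25  # hypothetical cutoff: skip pages that shrink by less than 25%
SAMPLE_BYTES = 512                # estimate compressibility from a small prefix to save time


def estimate_compressibility(page: bytes) -> float:
    """Cheaply estimate how compressible a page is by compressing a small sample of it."""
    sample = page[:SAMPLE_BYTES]
    if not sample:
        return 0.0
    compressed = zlib.compress(sample, level=1)   # fast compression level
    return 1.0 - len(compressed) / len(sample)


def write_page(page: bytes) -> bytes:
    """Selective compression: compress only pages estimated to be highly compressible."""
    if estimate_compressibility(page) >= COMPRESSIBILITY_THRESHOLD:
        return zlib.compress(page, level=1)       # worth spending compression time
    return page                                   # bypass: low compressibility, write as-is


if __name__ == "__main__":
    import os
    text_page = b"log entry: app started\n" * 200  # text-like data: compressed
    random_page = os.urandom(4096)                 # random-looking data: bypassed
    print(len(write_page(text_page)), len(write_page(random_page)))
```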
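Similarly, the mapping-cache-aware scheduling described in the second part can be sketched under simplified assumptions: a host-side LRU model of the device's demand-based page-level mapping cache, tracked at translation-page granularity. The class and function names, the translation-page size, and the one-shot reordering are illustrative, not the thesis's implementation.

```python
from collections import OrderedDict

ENTRIES_PER_TRANSLATION_PAGE = 512   # assumed number of mapping entries per cached translation page


class MappingCacheModel:
    """Host-side LRU model of the device's demand-based page-level mapping cache."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cached = OrderedDict()  # translation-page number -> None (ordered by recency)

    def is_hit(self, lpn: int) -> bool:
        return (lpn // ENTRIES_PER_TRANSLATION_PAGE) in self.cached

    def access(self, lpn: int) -> None:
        tpn = lpn // ENTRIES_PER_TRANSLATION_PAGE
        self.cached[tpn] = None
        self.cached.move_to_end(tpn)
        if len(self.cached) > self.capacity:
            self.cached.popitem(last=False)       # evict the least recently used translation page


def schedule(pending: list[int], cache: MappingCacheModel) -> list[int]:
    """Hit-prioritized, request-batched dispatch order for pending logical page numbers."""
    hits = [lpn for lpn in pending if cache.is_hit(lpn)]       # resolvable without a cache miss
    misses = [lpn for lpn in pending if not cache.is_hit(lpn)]
    # Batch requests that share a translation page so sequential hits follow each miss.
    misses.sort(key=lambda lpn: (lpn // ENTRIES_PER_TRANSLATION_PAGE, lpn))
    order = hits + misses
    for lpn in order:
        cache.access(lpn)
    return order
```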

Together, these optimizations enable the mobile flash storage system to deliver improved I/O performance.