Execution Repair for Spark Programs by Active Maintenance of Partition Dependency
Student thesis: Master's Thesis
Detail(s)
Award date | 5 Jan 2022
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/theses/theses(0f011485-c1c7-4001-8513-c42472f2f2f3).html
Abstract
Programs running on clusters of Spark nodes are widely used in practice. These programs generate many sets of intermediate datasets. Each set is called a data partition, and a partition instance contains the actual data records for processing. Keeping all of these intermediate partition instances alive at the same time is impractical, so partition instances are ephemeral by default, i.e., they are deleted after use. Programs therefore need explicit code to reuse some generated partition instances so that their subsequent computations complete in a reasonable time. At runtime, however, the underlying Spark platform may independently delete such instances or accidentally render them inaccessible to program executions. These inaccessible instances invalidate the assumption, made in the logic of these programs, that the instances they depend on are present. The result is a performance bloat problem in which equivalent partition instances are excessively generated and deleted; such problems may further lead to memory corruption or even execution failure.
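To make the default lifecycle concrete, the following is a minimal sketch, assuming a local Spark session and a hypothetical input file `input.txt`: transformations produce ephemeral partition instances, and only an explicit `persist` call asks Spark to keep them for reuse.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object ReuseSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ReuseSketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Transformations only describe partitions; their instances are
    // ephemeral by default and are deleted after each use.
    val cleaned = sc.textFile("input.txt")
      .map(_.trim)
      .filter(_.nonEmpty)

    // Explicit code is needed to reuse the generated instances;
    // without this call, every action below recomputes them from scratch.
    cleaned.persist(StorageLevel.MEMORY_ONLY)

    val lineCount = cleaned.count()                          // computes and caches the instances
    val wordCount = cleaned.flatMap(_.split("\\s+")).count() // reuses the cached instances

    // At runtime Spark may still evict or lose these instances, which is
    // exactly the situation that triggers the bloat described above.
    println(s"$lineCount lines, $wordCount words")
    cleaned.unpersist()
    spark.stop()
  }
}
```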
This thesis proposes a novel and effective framework, called FAR (Fine-grained Active Repair), to handle this class of performance bloat problem. FAR actively repairs executions by maintaining the instance dependencies in Spark program executions: it monitors partition instance lifecycle activities and determines whether each partition instance should be kept or removed according to its dependency relations in the current execution plan.
A typical Spark program execution consists of two alternating phases: the lineage graph construction phase, in which the program builds dependencies between partitions, and the concrete execution phase, which is triggered by an action operation on target partitions and starts the computation of the corresponding partition instances.
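Both phases are visible in any Spark program. The short sketch below (names are illustrative) builds a lineage graph with lazy transformations and only materializes partition instances when the final action runs.

```scala
import org.apache.spark.sql.SparkSession

object TwoPhaseSketch {
  def main(args: Array[String]): Unit = {
    val sc = SparkSession.builder()
      .appName("TwoPhaseSketch").master("local[*]")
      .getOrCreate().sparkContext

    // Phase 1: lineage graph construction. Transformations are lazy; each
    // one only records how its partitions depend on its parent's partitions.
    val base    = sc.parallelize(1L to 1000000L, numSlices = 8)
    val squares = base.map(x => x * x)
    val evens   = squares.filter(_ % 2 == 0)

    // Phase 2: concrete execution. The action triggers computation of the
    // target partitions and, transitively, of every partition instance they
    // depend on in the lineage graph.
    val total = evens.reduce(_ + _)
    println(s"sum of even squares = $total")
    sc.stop()
  }
}
```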
FAR computes a budget for each partition at the beginning of the concrete execution phase, where the budget counts the outstanding uses of that partition, directly or indirectly, in computing the target partitions. When a new partition instance is generated, FAR decreases the budget of each partition that the new instance depends on. Once the budget of a partition is exhausted, FAR instructs Spark to delete that partition's instance. Conversely, if the budget of a partition is not yet exhausted but no instance of that partition can be found, FAR increases the budget of each partition that this partition depends on, since recomputing it will consume those dependencies again. FAR also instructs Spark to keep the newly generated instance rather than deleting it right after its use.
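This bookkeeping can be pictured with a small sketch. Everything below (`Partition`, `BudgetTracker`, the `delete`/`keep` callbacks) is a hypothetical stand-in for FAR's internals, written only to mirror the budget rules described above, not FAR's actual interfaces.

```scala
import scala.collection.mutable

// Hypothetical stand-in: a partition carries only an id and the
// partitions it depends on (an acyclic lineage graph).
final case class Partition(id: String, deps: Seq[Partition])

final class BudgetTracker(targets: Seq[Partition]) {
  // budget(p) = outstanding uses of p while computing the target partitions,
  // computed once at the start of the concrete execution phase.
  private val budget  = mutable.Map.empty[Partition, Int].withDefaultValue(0)
  private val visited = mutable.Set.empty[Partition]
  targets.foreach(countUses)

  // Walk the lineage once: every partition that will be computed consumes
  // each of its direct dependencies exactly once.
  private def countUses(p: Partition): Unit =
    if (visited.add(p)) p.deps.foreach { d =>
      budget(d) += 1
      countUses(d)
    }

  // Called when the instance of `p` has just been generated. Each dependency
  // instance was consumed once; an exhausted budget means the dependency's
  // instance can be deleted, mirroring FAR's instruction to Spark.
  def onInstanceGenerated(p: Partition,
                          delete: Partition => Unit,
                          keep: Partition => Unit): Unit = {
    p.deps.foreach { d =>
      budget(d) -= 1
      if (budget(d) <= 0) delete(d)
    }
    // If p still has outstanding uses, keep its fresh instance instead of
    // letting Spark discard it right after use.
    if (budget(p) > 0) keep(p)
  }

  // Called when `p` still has outstanding uses but its instance is missing:
  // recomputing p will consume each of its dependencies once more.
  def onInstanceMissing(p: Partition): Unit =
    p.deps.foreach(d => budget(d) += 1)
}
```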
This thesis validates the efficiency and effectiveness of FAR through a series of experiments. The experimental results show that, when some dependency partition instances are inaccessible, FAR achieves a 7.3x-67.0x speedup compared to Spark. The results also reveal that program executions actively repaired by FAR can run to successful completion in environments with 1.7x-2.0x less available memory than Spark requires.
In conclusion, this thesis makes two major contributions. First, it presents the first work to reveal a class of performance bloat in which equivalent partition instances are excessively produced. Second, it proposes FAR, a novel and effective framework that systematically handles such performance bloat and actively repairs executions by maintaining the instance dependencies in Spark program executions.