Scheduling with Calibrations, Fairness and Prediction Advice


Student thesis: Doctoral Thesis





Award date: 26 Aug 2022


Scheduling algorithms have a wide range of real-life applications: they allocate resources to tasks so as to optimize a particular goal while satisfying certain constraints. Many real-life optimization and decision problems can be formulated as scheduling models, and the objective of interest differs across application scenarios. For example, when allocating resources to agents, fairness is a central concern, whereas when assigning resources to machines, we typically optimize a single objective. In this thesis, we consider the application of scheduling algorithms in three scenarios: industrial scheduling, algorithmic fairness, and machine learning. For each application, we design efficient algorithms with provable guarantees.

In the first part, we focus on how to schedule jobs when machines are not always reliable. Specifically, a machine processes jobs with high precision only while it stays calibrated, each calibration lasts for a fixed length of time, and every job must be executed while the machine is in a calibrated state. The goal is to schedule all jobs using the minimum number of calibrations without violating their timing constraints. We consider the most general timing constraint: each job can only be processed in a given set of time slots, and the slots in this set need not be consecutive. We prove that the problem is set-cover hard and give an optimal greedy algorithm. In addition, we consider throughput maximization, where we want to schedule as many jobs as possible using at most a given number of calibrations. We also provide a greedy algorithm for this problem and show that it is optimal.
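The flavor of such a greedy can be illustrated with a small sketch. Under simplifying assumptions (a single machine, unit-length jobs, calibration windows scored independently), it repeatedly opens the length-T calibration window that lets the largest number of still-unscheduled jobs be matched to slots, in the spirit of the greedy algorithm for set cover. The helper names are hypothetical, and this is not the optimal algorithm proved in the thesis.

```python
def matched_jobs(job_slots, window):
    """Maximum set of jobs schedulable inside `window`, one job per slot,
    computed with augmenting paths (bipartite matching)."""
    slot_of = {}  # slot -> job currently assigned to it

    def augment(j, seen):
        for s in job_slots[j] & (window - seen):  # allowed, in-window, unvisited slots
            seen.add(s)
            if s not in slot_of or augment(slot_of[s], seen):
                slot_of[s] = j
                return True
        return False

    return {j for j in job_slots if augment(j, set())}


def greedy_calibrations(job_slots, T):
    """Set-cover-style greedy sketch: while jobs remain, open the calibration
    window of length T that covers the most unscheduled jobs.
    job_slots: {job id: set of allowed time slots}.  Returns window start times."""
    horizon = max(max(s) for s in job_slots.values()) + 1
    remaining = dict(job_slots)
    starts = []
    while remaining:
        best_start, best_set = None, set()
        for start in range(horizon):
            got = matched_jobs(remaining, set(range(start, start + T)))
            if len(got) > len(best_set):
                best_start, best_set = start, got
        if not best_set:
            raise ValueError("some remaining job has no usable slot")
        starts.append(best_start)
        for j in best_set:
            del remaining[j]
    return starts
```

For example, jobs with allowed slots {0, 1}, {1}, and {5} need two calibrations of length 2: one window covering slots 0-1 and one covering slot 5.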

In the second part, we propose a novel scheduling model with fairness, in which a set of jobs must be allocated to a group of agents. Different agents may value the same job differently, and the set of jobs assigned to an agent must be feasible; that is, the jobs can be scheduled non-preemptively without violating their timing constraints. We aim to find a fair way to allocate jobs to agents and investigate two fairness notions commonly used in computational economics: envy-freeness up to one item (EF1) and maximin share (MMS). An EF1 allocation requires that no agent envies another agent once some item is removed from the other's bundle. An MMS allocation requires that every agent receives a bundle worth at least her maximin share threshold. Since the empty allocation is trivially EF1, we additionally require the allocation to be Pareto optimal (PO): no alternative allocation can make some agent better off without making another agent worse off. For maximin share, we prove that a constant-approximation MMS allocation is guaranteed to exist, and that it can be computed efficiently at the cost of a further constant factor in the approximation. For EF1 and PO, we prove that the two properties cannot always be satisfied simultaneously, even when all agents have the same valuation for each job and the timing constraints form an interval graph. In contrast, they are compatible and can be computed efficiently when all jobs have unit length.
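The EF1 condition above translates directly into a check over all ordered pairs of agents: agent i must not envy agent j once the item of j's bundle that i values most is removed. A minimal sketch (the function name `is_ef1` and the input representation are illustrative assumptions, not from the thesis):

```python
def is_ef1(values, bundles):
    """Check envy-freeness up to one item (EF1).
    values[i][g]: agent i's value for item g.
    bundles[i]: set of item indices allocated to agent i."""
    def val(i, bundle):
        return sum(values[i][g] for g in bundle)

    for i in range(len(bundles)):
        for j in range(len(bundles)):
            if i != j:
                # Most valuable item in j's bundle from i's point of view
                # (0 if j's bundle is empty).
                best = max((values[i][g] for g in bundles[j]), default=0)
                if val(i, bundles[i]) < val(i, bundles[j]) - best:
                    return False
    return True
```

Note that the empty allocation passes this check for any valuations, which is exactly why EF1 alone is too weak and Pareto optimality is imposed on top of it.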

In the last part, we study how to incorporate machine learning advice to improve the competitive ratio of a classical online scheduling problem. We look into the dynamic TCP acknowledgment problem, motivated by the well-known Transmission Control Protocol in computer networks. In this problem, data packets arrive over time, the number of packets is hidden from the algorithm, and the server needs to acknowledge them. The server can either send an acknowledgment as soon as a packet arrives or wait several time slots and acknowledge the accumulated packets together; the former incurs a higher acknowledgment cost, while the latter incurs delay cost. The goal is to choose the acknowledgment times so as to minimize the total delay and acknowledgment cost. In the learning-augmented setting, the algorithm is additionally given a prediction of the number of packets. The ratio of a learning-augmented algorithm is defined as the algorithm's cost over the optimal cost, viewed as a function of the prediction: its value under a perfect prediction is called the consistency ratio, and its value under an arbitrarily bad prediction is called the robustness ratio. For this problem, we design an algorithm that achieves the optimal consistency ratio and robustness ratio simultaneously. Specifically, our algorithm finds a near-optimal offline solution when the prediction is perfect, while its performance remains arbitrarily close to that of the optimal online algorithm when the prediction is unreliable.
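The delay-versus-acknowledgment trade-off can be made concrete with the classic threshold rule for this problem: acknowledge once the accumulated delay of outstanding packets reaches the acknowledgment cost, which is well known to be 2-competitive. The sketch below simulates that rule on discrete time slots, with a parameter `lam` standing in for how a prediction might shift the threshold; it is an illustration of the setting, not the thesis's learning-augmented algorithm.

```python
def tcp_ack_cost(arrivals, ack_cost=1, lam=1.0):
    """Simulate a threshold rule for dynamic TCP acknowledgment.
    arrivals[t]: number of packets arriving in slot t.  Each outstanding
    packet pays one unit of delay per slot it waits (counting its arrival
    slot); each acknowledgment costs `ack_cost`.  With lam = 1 this is the
    classic 2-competitive rule; a learning-augmented algorithm would tune
    the threshold using the predicted number of packets."""
    total = outstanding = pending_delay = 0
    for packets in arrivals:
        outstanding += packets
        pending_delay += outstanding  # every waiting packet accrues one slot of delay
        if outstanding and pending_delay >= lam * ack_cost:
            total += ack_cost + pending_delay  # send an ack, pay both cost terms
            outstanding = pending_delay = 0
    if outstanding:  # flush whatever is left at the end of the horizon
        total += ack_cost + pending_delay
    return total
```

Raising `lam` batches more packets per acknowledgment at the price of extra delay, and lowering it does the opposite; an algorithm that trusts its prediction picks the threshold accordingly, while a robust one keeps it close to the classic value.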

Research areas

  • Scheduling, Fairness, Prediction, Approximation, Hardness