Essays on Offline Learning and Online Learning


Student thesis: Doctoral Thesis

Award date: 1 Aug 2019


This thesis studies potential issues and applications of two closely related learning problems: the ranking and selection (R&S) problem, an offline learning problem, and the multi-armed bandit (MAB) problem, an online learning problem.

In the first essay of this thesis, we aim to solve large-scale R&S problems in parallel computing environments. On one hand, large-scale R&S problems require a large amount of computation. On the other hand, parallel computing environments, which provide a large capacity for computation, are becoming prevalent and accessible to ordinary users. Solving large-scale R&S problems in parallel computing environments has therefore emerged as an important research topic in recent years. However, directly implementing existing R&S procedures, whether stage-wise or fully sequential, in parallel computing environments may run into problems: either the procedures require too many simulation observations, or their selection structures induce too many comparisons and too-frequent communication among the processors. In the first essay, inspired by the knockout-tournament arrangement of tennis Grand Slam tournaments, we develop new R&S procedures to solve large-scale problems in parallel computing environments. We show that, whether or not the variances of the alternatives are known, our procedures theoretically achieve the lowest growth rate of the expected total sample size with respect to the number of alternatives and are thus optimal in rate. Moreover, common random numbers (CRNs) can easily be adopted in our procedures to further reduce the total sample size. Meanwhile, the comparison time in our procedures is negligible compared with the simulation time, and our procedures require almost no communication among the processors.
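The knockout-tournament idea can be illustrated with a minimal sketch. This is not the thesis's actual procedure (which handles statistical guarantees, unknown variances, and CRNs); it is a simplified toy in which each match uses a fixed sample size, and the function names, sample sizes, and Gaussian output model are all assumptions made for illustration. Note that a tournament over k alternatives plays only k - 1 matches, so the total sample size grows linearly in k, which loosely mirrors the rate-optimality claim; matches within a round are mutually independent and could run on separate processors with no communication until the round ends.

```python
import random


def simulate(mean, n):
    # Average of n noisy simulation observations of one alternative
    # (Gaussian output with unit variance is an assumed toy model).
    return sum(random.gauss(mean, 1.0) for _ in range(n)) / n


def knockout_select(means, n_per_match=64, seed=1):
    """Toy knockout tournament: pair up the surviving alternatives,
    keep the sample-mean winner of each match, and repeat until one
    alternative remains. Matches within a round are independent, so
    each round parallelizes trivially across processors."""
    random.seed(seed)
    survivors = list(range(len(means)))
    while len(survivors) > 1:
        next_round = []
        # With an odd number of survivors, one gets a bye to the next round.
        if len(survivors) % 2 == 1:
            next_round.append(survivors.pop())
        for a, b in zip(survivors[::2], survivors[1::2]):
            xa = simulate(means[a], n_per_match)
            xb = simulate(means[b], n_per_match)
            next_round.append(a if xa >= xb else b)
        survivors = next_round
    return survivors[0]


# With a clearly separated best alternative, the tournament picks it out.
best = knockout_select([0.0, 0.3, 2.0, 0.5, 0.1, 0.4, 0.2, 0.6])
```

The thesis's procedures additionally control the probability of correct selection; this sketch only conveys the tournament structure and its linear growth in total samples.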

In the second essay of this thesis, we investigate an application of the MAB problem. We study a dynamic pricing problem in which the observed cost varies from one selling period to the next while the demand function is unknown and independent of the observed cost. The decision maker selects a price from a menu of K prices in each period to maximize the expected cumulative profit. Motivated by the classical upper confidence bound (UCB) procedure for the MAB problem, we propose a UCB-Like policy to select the price. We examine the performance of the proposed policy under different cost settings, namely continuous and discrete costs. We also develop a forced sampling (FS) policy for the continuous-cost case. We show that, even though the FS policy may enjoy better theoretical results than the UCB-Like policy, it requires a problem-dependent parameter as an input, and its practical performance appears worse than that of the UCB-Like policy for a reasonably large number of selling periods.
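A UCB-style pricing policy of the kind described above can be sketched as follows. The exact index used in the thesis may differ; here we make the illustrative assumptions that demand at each menu price has an unknown mean, that observed demand is a noisy Gaussian sample of that mean, and that the policy picks the price maximizing an optimistic profit estimate (price minus the period's observed cost, times a UCB on demand). All names and parameters are hypothetical.

```python
import math
import random


def ucb_like_pricing(prices, true_demand, costs, noise_sd=0.1, seed=0):
    """Sketch of a UCB-style policy for pricing with a period-varying
    observed cost. Each price's unknown mean demand gets an optimistic
    estimate (sample mean plus an exploration bonus); the chosen price
    maximizes optimistic profit (price - cost_t) * demand_UCB."""
    random.seed(seed)
    K = len(prices)
    counts = [0] * K          # times each price has been offered
    dem_sum = [0.0] * K       # cumulative observed demand per price
    profit = 0.0
    for t, c in enumerate(costs, start=1):
        index = []
        for k in range(K):
            if counts[k] == 0:
                index.append(float("inf"))  # force one pull of every price
            else:
                mean = dem_sum[k] / counts[k]
                bonus = math.sqrt(2.0 * math.log(t) / counts[k])
                index.append((prices[k] - c) * (mean + bonus))
        k = max(range(K), key=index.__getitem__)
        # Observe noisy demand at the chosen price (assumed Gaussian noise).
        d = random.gauss(true_demand[k], noise_sd)
        counts[k] += 1
        dem_sum[k] += d
        profit += (prices[k] - c) * d
    return profit, counts
```

Because the cost is observed before each decision, the exploration bonus on demand is rescaled by the current margin (price minus cost); a price that is unprofitable at today's cost can still be the optimistic choice tomorrow when the cost drops, which is the feature that distinguishes this setting from a standard MAB.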