Artificial Intelligence through Crowdsourced Computation and Parallel Computing


Student thesis: Doctoral Thesis


Supervisors/Advisors
  • Chung CHAN (Supervisor)
  • Chee Wei TAN (External person) (External Co-Supervisor)
Award date: 15 Dec 2022


Computers and human minds are collaborating more closely than ever before to solve complex problems. For example, crowdsourced computation can improve computational models, while parallel computing can accelerate computation. Inspired by this, this thesis presents three related works on using human intelligence to complement the raw computational power of computers.

Crowdsourced data is widely used as feedback to adapt modern systems. One specific application is blended learning education, which alternates between asynchronous pre-class and synchronous in-class activities. Subject to constraints on desired learning-outcome specifications and individual student preferences, can we jointly optimize pre-class and in-class tasks to improve the two-way interaction between students and the instructor? To address this problem, we formulate the students' learning progress as a linear program and develop a mobile chatbot, integrated with crowdsourced feedback data, that we deploy in a real blended classroom. We also explore crowdsourced data in machine learning education, proposing a novel chatbot-server framework that lets students program and train their game AI with crowdsourced data.
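The thesis does not spell out the linear program here, so the following is only a hypothetical toy illustration of the kind of joint pre-class/in-class optimization described: the variables (pre-class minutes `p`, in-class minutes `q`), the gain coefficients, and the time-budget constraints are all invented for the sketch. Because a two-variable LP attains its optimum at a vertex of the feasible region, the sketch simply enumerates the vertices rather than depending on an external solver.

```python
# Toy joint optimization of pre-class (p) and in-class (q) study minutes.
# Maximize an assumed learning gain 3p + 5q, subject to a weekly time
# budget p + q <= 120 and a minimum pre-class preparation p >= 30.
# All coefficients are illustrative, not from the thesis.

def solve_toy_lp():
    # Vertices of the feasible region {p + q <= 120, p >= 30, q >= 0}.
    vertices = [(30, 0), (120, 0), (30, 90)]
    gain = lambda p, q: 3 * p + 5 * q
    # The LP optimum lies at one of these vertices.
    return max(vertices, key=lambda v: gain(*v))

best = solve_toy_lp()
print(best)  # (30, 90): minimum pre-class preparation, remaining budget in class
```

A realistic formulation would carry one such variable pair per student and per learning outcome, with preference constraints supplied by the crowdsourced feedback; a general-purpose LP solver would then replace the vertex enumeration.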

The second topic focuses on a parallel computing system in which machines and the network suffer non-negligible faults that may crash the system. The traditional way to increase reliability is to restart failed jobs. To avoid unnecessary time wasted on reboots, we propose an optimal scheduling strategy that enables fault-tolerant, reliable computation and protects computational integrity. We develop optimization-based algorithms to efficiently construct the optimal coding matrices subject to fault-tolerance specifications. Performance evaluation demonstrates that the optimal scheduling effectively reduces the overall running time of parallel computing while tolerating a wide range of failure rates.
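The abstract does not give the coding matrices themselves, so the following is a minimal sketch of the general coded-computation idea it alludes to, with an invented encoding: a matrix-vector product is split across three workers as two systematic shares plus one parity share, so the result is recoverable from any two workers and one crash can be tolerated without restarting the job.

```python
# Hypothetical sketch of coded redundancy for fault-tolerant computation.
# Split A @ x across 3 workers using shares [A1; A2; A1 + A2]; any 2 of
# the 3 worker outputs suffice to decode, so one failure is tolerated.

def matvec(rows, x):
    return [sum(a * b for a, b in zip(row, x)) for row in rows]

def add(rows_a, rows_b):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(rows_a, rows_b)]

A1 = [[1, 2], [3, 4]]   # top half of A
A2 = [[5, 6], [7, 8]]   # bottom half of A
x = [1, 1]

# Worker tasks: two systematic shares plus one parity share.
tasks = {0: A1, 1: A2, 2: add(A1, A2)}
results = {i: matvec(rows, x) for i, rows in tasks.items()}

def decode(available):
    """Recover [A1 x ; A2 x] from any two worker results."""
    if 0 in available and 1 in available:
        return available[0] + available[1]
    if 0 in available:  # have A1 x and (A1 + A2) x: subtract to get A2 x
        return available[0] + [p - s for p, s in zip(available[2], available[0])]
    # have A2 x and (A1 + A2) x: subtract to get A1 x
    return [p - s for p, s in zip(available[2], available[1])] + available[1]

# Simulate worker 1 crashing: decode from workers 0 and 2 only.
survivors = {i: results[i] for i in (0, 2)}
print(decode(survivors))  # [3, 7, 11, 15] == full A @ x
```

The scheduling problem the thesis addresses is then which coded tasks to place on which machines, and when, so that the expected completion time is minimized for a given fault-tolerance specification; the simple sum parity above stands in for the optimized coding matrices it constructs.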