Towards Building An Adaptive Distributed Computation Framework for Massive Context Interplay

Project: Research

Description

Distributed computation is projected as one of the key technology trends by Gartner and has gained momentum over the past decade, with applications ranging from efficiently training large-scale foundation models and collaborative driving-strategy learning for future autonomous driving to precision healthcare and federated wearable devices. Distributed computation is strongly influenced by a massive number of emerging contexts, i.e., environmental conditions or situations such as time, location, communication bandwidth constraints, privacy requirements, and heterogeneity in data statistics and resources. The interplay among these contexts often affects system performance, especially when the number of contexts becomes large. For example, reduced communication among clients may help to protect privacy but may degrade computation performance. However, carefully examining the context interplay is tractable for only a few cases with a small number of contexts, so there is an urgent need for an adaptive distributed computation framework suitable for massive context interplay.

To tackle these challenges, this project will inspect two tractable examples with fewer contexts, gain a deep understanding of the communication-computation-privacy/timeliness interplay, bring new perspectives on the massive-context-interplay problem, and ultimately build a comprehensive distributed model learning framework through an online contextual learning approach. The project plans to perform the following distributed computation tasks towards this objective (illustrative sketches of each task follow this description).

1) We plan to design a novel random compression mechanism that compresses the exchanged model parameters to meet differential privacy requirements that change dynamically across time and clients, while retaining satisfactory distributed computation performance (e.g., learning convergence).

2) To support timely model updates in a communication-constrained distributed learning system, we propose to design an effective scheduling scheme that determines which clients participate in sharing models.

3) Bypassing the analysis of the complicated massive context interplay, we aim to devise a general online contextual learning approach that gradually finds an optimal model by aggregating a number of existing models.

In addition, we plan to evaluate the proposed framework via proof-of-concept experiments on both general neural network training and federated healthcare diagnosis tasks. This project brings together knowledge and methodologies from networking, machine learning, information theory, and communications to achieve a theoretical understanding of the adaptive distributed computation framework through algorithmic design and performance analysis, to test it in proof-of-concept experiments, and, in turn, to use those results to improve the framework. We strongly believe that this project will greatly inspire the related research community.
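To make Task 1 concrete, below is a minimal sketch of one standard way to combine random compression with differential privacy: random sparsification of a model update followed by norm clipping and Gaussian noise. The function name, the parameters (keep_ratio, clip_norm, noise_std), and the sparsify-clip-noise pipeline are illustrative assumptions, not the project's actual mechanism; in particular, noise_std would in practice be calibrated to each client's current (epsilon, delta) privacy target.

```python
# Illustrative sketch only: randomized compression plus a Gaussian mechanism
# for private model exchange. All names and parameters are assumptions.
import numpy as np

def compress_with_privacy(update, keep_ratio, clip_norm, noise_std, rng):
    """Randomly sparsify a model update, clip it, and add Gaussian noise.

    keep_ratio : fraction of coordinates transmitted (compression level)
    clip_norm  : L2 clipping bound, limiting per-client sensitivity
    noise_std  : Gaussian noise scale, set from the client's current
                 (time- and client-varying) differential-privacy target
    """
    d = update.size
    # Random sparsification: keep a random subset of coordinates, rescaled
    # so the compressed update remains unbiased in expectation.
    mask = rng.random(d) < keep_ratio
    sparse = np.where(mask, update / keep_ratio, 0.0)
    # Clip to bound the sensitivity of what the client reveals.
    norm = np.linalg.norm(sparse)
    if norm > clip_norm:
        sparse = sparse * (clip_norm / norm)
    # Gaussian noise calibrated to the privacy requirement.
    return sparse + rng.normal(0.0, noise_std, size=d)

rng = np.random.default_rng(0)
noisy = compress_with_privacy(rng.normal(size=1000), keep_ratio=0.1,
                              clip_norm=1.0, noise_std=0.05, rng=rng)
```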
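For Task 2, the following sketch shows one plausible scheduling rule under a communication budget: greedily select the clients whose models are most stale relative to their upload cost. The staleness-per-bandwidth criterion and all names here are assumptions for illustration; the project's scheduling scheme may use a different timeliness metric.

```python
# Illustrative sketch only: greedy client scheduling under a bandwidth budget.
def schedule_clients(staleness, upload_cost, budget):
    """Pick clients to upload this round without exceeding the budget.

    staleness   : dict client_id -> rounds since last successful upload
    upload_cost : dict client_id -> bandwidth needed to send its model
    budget      : total bandwidth available this round
    """
    # Rank clients by timeliness gain per unit of bandwidth.
    ranked = sorted(staleness, key=lambda c: staleness[c] / upload_cost[c],
                    reverse=True)
    selected, used = [], 0.0
    for c in ranked:
        if used + upload_cost[c] <= budget:
            selected.append(c)
            used += upload_cost[c]
    return selected

# Example: client "c" (stale and cheap) and "a" (very stale) are scheduled.
print(schedule_clients({"a": 5, "b": 1, "c": 3},
                       {"a": 2.0, "b": 1.0, "c": 1.0}, budget=3.0))
```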
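For Task 3, one standard online-learning baseline for aggregating a pool of existing models is an exponential-weights (Hedge-style) update, sketched below. This is only an assumed stand-in for the project's online contextual learning approach, which would additionally condition the aggregation weights on the observed context; the class and parameter names are hypothetical.

```python
# Illustrative sketch only: exponential-weights aggregation over a fixed
# pool of candidate models; the project's contextual approach may differ.
import numpy as np

class OnlineModelAggregator:
    def __init__(self, n_models, lr=0.1):
        self.log_w = np.zeros(n_models)  # log-weights over candidate models
        self.lr = lr

    def predict(self, model_preds):
        # Weighted combination of the candidate models' predictions.
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        return w @ np.asarray(model_preds)

    def update(self, model_losses):
        # Downweight models that incurred high loss on the latest example.
        self.log_w -= self.lr * np.asarray(model_losses)

# Usage: three models of varying quality; weight shifts toward the best one.
rng = np.random.default_rng(1)
agg = OnlineModelAggregator(n_models=3)
for _ in range(200):
    target = rng.normal()
    preds = [target + rng.normal(0.0, s) for s in (0.1, 0.5, 2.0)]
    combined = agg.predict(preds)
    agg.update([(p - target) ** 2 for p in preds])
```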

Detail(s)

Project number: 9043555
Grant type: GRF
Status: Active
Effective start/end date: 1/01/24 → …