Resource-Constrained Federated Learning
Description

A typical machine learning paradigm requires storing the training data on one computer or in one data center. When data privacy is a concern, such a centralized approach is not applicable. To address this, federated learning was proposed, which enables collaborative on-device learning without migrating private end-device data to a central server. However, the federated learning paradigm faces daunting challenges when deploying state-of-the-art complex neural architectures to resource-constrained hardware platforms, in particular low-end Internet-of-Things (IoT) devices. This project aims to explore hardware-efficient Artificial Intelligence (AI) techniques to support federated knowledge transfer across diverse IoT hardware platforms. Specifically, the proposed research consists of three thrusts. First, we will investigate how to design hardware-efficient AI from the neural quantization perspective, to enable federated intelligence transfer among various hardware platforms. Second, we will explore searching for optimal neural architectures in a data-agnostic manner to further strengthen hardware-efficient federated intelligence. Finally, we will build a general-purpose testbed to rigorously validate the proposed research and expand the impact of this project by incorporating various hardware platforms into a wide range of AI applications.
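To make the first thrust concrete, here is a minimal sketch of one plausible instance of quantization-based federated learning: clients uniformly quantize their model updates to low-bit integers before transmission, and the server dequantizes and averages them, FedAvg-style. This is an illustration only, not the project's actual method; the function names, bit width, and aggregation scheme are assumptions introduced for this example.

```python
# Illustrative sketch only: the project description does not specify an
# algorithm. Clients quantize float updates to 8-bit integers (saving
# bandwidth and easing deployment on low-end IoT hardware); the server
# dequantizes and averages them, as in federated averaging (FedAvg).
import numpy as np

def quantize(update, bits=8):
    """Uniformly quantize a float update to signed `bits`-bit integers."""
    scale = float(np.max(np.abs(update))) or 1.0   # avoid divide-by-zero
    levels = 2 ** (bits - 1) - 1                   # e.g. 127 for 8 bits
    q = np.round(update / scale * levels).astype(np.int8)
    return q, scale

def dequantize(q, scale, bits=8):
    """Map quantized integers back to approximate float values."""
    levels = 2 ** (bits - 1) - 1
    return q.astype(np.float32) * scale / levels

def federated_average(client_updates, bits=8):
    """Server-side aggregation: dequantize each client update, then average."""
    recovered = [dequantize(*quantize(u, bits)) for u in client_updates]
    return np.mean(recovered, axis=0)

# Toy usage: three simulated clients, each with a small update vector.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4).astype(np.float32) for _ in range(3)]
avg = federated_average(updates)
```

The averaged result stays close to the true mean of the float updates while each client only transmits 8-bit integers plus one scale factor, which is the kind of communication/compute saving the neural-quantization thrust targets.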
Effective start/end date: 1/09/23 → …