TY - GEN
T1 - Edge-MSL
T2 - 2024 IEEE Conference on Computer Communications, INFOCOM 2024
AU - Kim, Taejin
AU - Zuo, Jinhang
AU - Zhang, Xiaoxi
AU - Joe-Wong, Carlee
PY - 2024
Y1 - 2024
AB - The emergence of 5G technology and edge computing enables the collaborative use of data by mobile users for scalable training of machine learning models. Privacy concerns and communication constraints, however, can prohibit users from offloading their data to a single server for training. Split learning, in which models are split between end users and a central server, partially resolves these concerns but requires exchanging information between users and the server in each local training iteration. Splitting models between end users and geographically close edge servers can therefore significantly reduce communication latency and training time. In this setting, users must decide which edge servers should host part of their model to minimize training latency, a decision further complicated by the presence of multiple mobile users competing for resources. We present Edge-MSL, a novel formulation of the mobile split learning problem as a contextual multi-armed bandit framework. To counter the scalability challenges of a centralized Edge-MSL solution, we introduce a distributed solution that minimizes competition between users for edge resources, reducing regret by at least a factor of two compared to a greedy baseline. The distributed Edge-MSL approach also improves trained model convergence, yielding a 15% increase in test accuracy. © 2024 IEEE.
UR - http://www.scopus.com/inward/record.url?scp=85201790812&partnerID=8YFLogxK
DO - 10.1109/INFOCOM52122.2024.10621231
M3 - RGC 32 - Refereed conference paper (with host publication)
T3 - Proceedings - IEEE INFOCOM
SP - 391
EP - 400
BT - IEEE INFOCOM 2024 - IEEE Conference on Computer Communications
PB - IEEE
Y2 - 20 May 2024 through 23 May 2024
ER -