TY - JOUR
T1 - Federated Learning over Multihop Wireless Networks with In-Network Aggregation
AU - Chen, Xianhao
AU - Zhu, Guangyu
AU - Deng, Yiqin
AU - Fang, Yuguang
PY - 2022/6
Y1 - 2022/6
N2 - Communication limitation at the edge is widely recognized as a major bottleneck for federated learning (FL). Multi-hop wireless networking provides a cost-effective solution to enhance service coverage and spectrum efficiency at the edge, which could facilitate large-scale and efficient machine learning (ML) model aggregation. However, FL over multi-hop wireless networks has rarely been investigated. In this paper, we optimize FL over wireless mesh networks by taking into account the heterogeneity in communication and computing resources at mesh routers and clients. We present a framework in which each intermediate router performs in-network model aggregation before sending the data to the next hop, so as to reduce the outgoing data traffic and hence aggregate more models under limited communication resources. To accelerate model training, we formulate our optimization problem by jointly considering model aggregation, routing, and spectrum allocation. Although the problem is a non-convex mixed-integer nonlinear program, we transform it into a mixed-integer linear program (MILP) and develop a coarse-grained fixing procedure to solve it efficiently. Simulation results demonstrate the effectiveness of the solution approach and the superiority of the in-network aggregation scheme over the counterpart without in-network aggregation. © 2022 IEEE.
KW - edge computing
KW - Federated learning
KW - in-network aggregation
KW - multi-hop wireless network
KW - wireless mesh network
UR - http://www.scopus.com/inward/record.url?scp=85129377163&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85129377163&origin=recordpage
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:000809406400077
U2 - 10.1109/TWC.2022.3168538
DO - 10.1109/TWC.2022.3168538
M3 - RGC 21 - Publication in refereed journal
SN - 1536-1276
VL - 21
SP - 4622
EP - 4634
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
IS - 6
ER -