Stanza: Layer Separation for Distributed Training in Deep Learning

Research output: Journal Publications and Reviews (RGC: 21, 22, 62): Publication in refereed journal

Detail(s)

Original language: English
Number of pages: 12
Journal / Publication: IEEE Transactions on Services Computing
Publication status: Online published - 6 Apr 2020

Abstract

The parameter server architecture is prevalently used for distributed deep learning. Each worker machine in such a system trains the complete model, which leads to a large volume of network data transfer between workers and servers. We empirically observe that this data transfer has a major impact on training time. We present a new distributed training system called Stanza to tackle this problem. Stanza exploits the fact that in many models, such as convolutional neural networks, most data exchange is attributed to the fully connected layers, while most computation is carried out in the convolutional layers. We therefore propose layer separation in distributed training: most nodes of the cluster train only the convolutional layers, while the rest train the fully connected layers. Gradients and parameters of the fully connected layers no longer need to be exchanged across the entire cluster, substantially reducing the data transfer volume. We implement Stanza on PyTorch and evaluate its performance on Azure and EC2. Results show that Stanza accelerates training significantly over current parameter server systems: for example, on EC2 instances with Tesla V100 GPUs and 10 Gbps bandwidth, Stanza is 1.34x-13.9x faster for common deep learning models.
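The layer separation idea can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the ConvPart/FCPart names, layer sizes, and the dummy mini-batch are assumptions made only for illustration. It shows how a CNN might be split so that the many convolution workers compute the compute-heavy, parameter-light convolutional front end and ship the (small) activations to a few FC workers holding the parameter-heavy fully connected layers, so FC gradients and parameters never cross the whole cluster.

import torch
import torch.nn as nn

class ConvPart(nn.Module):
    """Convolutional layers: most of the computation, few parameters.
    Replicated on the many convolution workers (hypothetical split)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )

    def forward(self, x):
        return torch.flatten(self.features(x), 1)

class FCPart(nn.Module):
    """Fully connected layers: most of the parameters, little computation.
    Kept on a few FC workers, so their large weight matrices are not
    exchanged across the entire cluster."""
    def __init__(self, in_features=128 * 4 * 4, num_classes=10):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(in_features, 1024), nn.ReLU(),
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        return self.classifier(x)

# On a convolution worker: run the conv front end and send the activations
# over the network, instead of exchanging full FC gradients with servers.
conv_part, fc_part = ConvPart(), FCPart()
images = torch.randn(32, 3, 32, 32)    # dummy mini-batch (assumption)
activations = conv_part(images)        # what would travel to an FC worker
logits = fc_part(activations)          # computed on the FC worker
print(activations.shape, logits.shape)

In this sketch the tensor crossing the network is the flattened activation (batch x 2048 here), which is typically much smaller than the fully connected weight matrices that a conventional parameter server would ship every iteration.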

Research Area(s)

  • Distributed Training, Parameter Server, Deep Learning, Machine Learning