A Distributed Swarm Optimizer With Adaptive Communication for Large-Scale Optimization

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

73 Scopus Citations

Author(s)

  • Qiang Yang
  • Wei-Neng Chen
  • Tianlong Gu
  • Huaxiang Zhang
  • Huaqiang Yuan
  • Jun Zhang

Detail(s)

Original language: English
Article number: 8683963
Pages (from-to): 3393-3408
Journal / Publication: IEEE Transactions on Cybernetics
Volume: 50
Issue number: 7
Online published: 9 Apr 2019
Publication status: Published - Jul 2020

Abstract

Large-scale optimization problems with high dimensionality and high computational cost are now ubiquitous. To tackle such challenging problems efficiently, devising distributed evolutionary computation algorithms is imperative. To this end, this paper proposes a distributed swarm optimizer based on a special master-slave model. In this distributed optimizer, the master is mainly responsible for communicating with the slaves, while each slave iterates a swarm to traverse the solution space. An asynchronous and adaptive communication strategy based on a request-response mechanism is devised to let the slaves communicate with the master efficiently; in particular, the communication between the master and each slave is adaptively triggered during the iteration. To help the slaves search the space efficiently, an elite-guided learning strategy is designed that uses elite particles in the current swarm, together with the historically best solutions found by different slaves, to guide the update of particles. Together, this distributed optimizer asynchronously iterates multiple swarms to collaboratively seek the optimum in parallel. Extensive experiments on a widely used large-scale benchmark set substantiate that the distributed optimizer can: 1) achieve competitive effectiveness in terms of solution quality compared with state-of-the-art large-scale methods; 2) accelerate execution compared with the sequential algorithm, obtaining nearly linear speedup as the number of cores increases; and 3) preserve good scalability for solving higher-dimensional problems.
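To make the elite-guided learning idea concrete, the sketch below shows one possible form of the per-slave particle update described in the abstract. It is a hypothetical illustration, not the paper's actual equations: the elite ratio, inertia and acceleration coefficients, and the choice of one random elite plus one random shared historical best as exemplars are all assumptions introduced here for illustration.

import numpy as np

def elite_guided_update(positions, velocities, fitness, shared_bests,
                        elite_ratio=0.2, phi=1.49445, inertia=0.729, rng=None):
    """Hypothetical elite-guided update of one slave's swarm (minimization).

    positions, velocities : (n, d) arrays for the slave's swarm
    fitness               : (n,) array of objective values
    shared_bests          : historically best solutions received from the master
    elite_ratio, phi, inertia : assumed parameter values, not from the paper
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = positions.shape
    order = np.argsort(fitness)                  # best particles first
    n_elite = max(1, int(elite_ratio * n))
    elites = positions[order[:n_elite]]          # elite particles of this swarm
    archive = np.asarray(shared_bests)           # bests shared by other slaves

    new_pos, new_vel = positions.copy(), velocities.copy()
    for i in order[n_elite:]:                    # only non-elite particles move
        e = elites[rng.integers(n_elite)]        # random elite exemplar
        g = archive[rng.integers(len(archive))]  # random shared historical best
        r1, r2 = rng.random(d), rng.random(d)
        new_vel[i] = (inertia * velocities[i]
                      + phi * r1 * (e - positions[i])
                      + phi * r2 * (g - positions[i]))
        new_pos[i] = positions[i] + new_vel[i]
    return new_pos, new_vel

In the full distributed optimizer, each slave would run such an update loop on its own swarm and exchange its best solutions with the master only when the adaptive request-response mechanism triggers communication; the single-process function above omits that messaging layer.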

Research Area(s)

  • Distributed evolutionary algorithms, elite-guided learning (EGL), high-dimensional problems, large-scale optimization, particle swarm optimization (PSO)

Citation Format(s)

A Distributed Swarm Optimizer With Adaptive Communication for Large-Scale Optimization. / Yang, Qiang; Chen, Wei-Neng; Gu, Tianlong et al.
In: IEEE Transactions on Cybernetics, Vol. 50, No. 7, 8683963, 07.2020, p. 3393-3408.