A Novel Topology Adaptation Strategy for Dynamic Sparse Training in Deep Reinforcement Learning

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Detail(s)

Original language: English
Number of pages: 13
Journal / Publication: IEEE Transactions on Neural Networks and Learning Systems
Online published: 11 Sept 2024
Publication status: Online published - 11 Sept 2024

Abstract

Deep reinforcement learning (DRL) has been widely adopted in various applications, yet it faces practical limitations due to high storage and computational demands. Dynamic sparse training (DST) has recently emerged as a prominent approach to reducing these demands during both training and inference. However, existing DST methods achieve high sparsity levels at the cost of policy performance, because they prune connections by absolute weight magnitude and grow new connections at random. Addressing this, our study presents a generic method that can be seamlessly integrated into existing DST methods in DRL to enhance their policy performance while preserving their sparsity levels. Specifically, we develop a novel method for calculating the importance of connections within the model. We then dynamically adjust the sparse network topology by dropping existing connections and introducing new connections based on their respective importance values. Validated on eight widely used simulation tasks, our method improves two state-of-the-art (SOTA) DST approaches by up to 70% in episode return and in average return across all episodes under various sparsity levels.
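The drop-and-grow cycle described above can be sketched in a few lines. Note that the abstract does not specify the paper's importance measure; the sketch below substitutes a common saliency proxy, |weight × gradient|, so the function name, the `drop_frac` parameter, and the importance formula are all illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def adapt_topology(weights, grads, mask, drop_frac=0.1):
    """Importance-guided topology adaptation for one sparse layer (sketch).

    Drops the least important active connections and regrows an equal
    number of currently inactive ones with the highest importance, so the
    overall sparsity level is preserved. The importance score here,
    |weight * gradient|, is an assumed stand-in for the paper's measure.
    """
    importance = np.abs(weights * grads).ravel()   # assumed saliency proxy
    active = np.flatnonzero(mask)                  # indices of existing links
    inactive = np.flatnonzero(mask == 0)           # candidate new links
    k = int(drop_frac * active.size)
    if k == 0 or inactive.size < k:
        return mask
    # Drop the k active connections with the lowest importance.
    drop = active[np.argsort(importance[active])[:k]]
    # Grow the k inactive connections with the highest importance.
    grow = inactive[np.argsort(importance[inactive])[-k:]]
    new_mask = mask.copy()
    new_mask.ravel()[drop] = 0
    new_mask.ravel()[grow] = 1
    return new_mask
```

Because the number of dropped and grown connections is equal, the mask's density is unchanged after each adaptation step, matching the paper's stated goal of improving performance without altering the sparsity level.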

© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.

Research Area(s)

  • Deep reinforcement learning (DRL), dynamic sparse training (DST), topology adaptation strategy