Network Robustness Prediction: Influence of Training Data Distributions

Yang Lou*, Chengpei Wu, Junli Li*, Lin Wang, Guanrong Chen

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 – Publication in refereed journal › peer-review

6 Citations (Scopus)

Abstract

Network robustness refers to the ability of a network to continue functioning under malicious attacks, which is critical for various natural and industrial networks. Network robustness can be quantitatively measured by a sequence of values that record the remaining functionality after sequential node- or edge-removal attacks. Robustness evaluations are traditionally obtained by attack simulations, which are computationally very time-consuming and sometimes practically infeasible. Convolutional neural network (CNN)-based prediction provides a cost-efficient approach to quickly evaluating network robustness. In this article, the prediction performances of the learning feature representation-based CNN (LFR-CNN) and PATCHY-SAN methods are compared through extensive empirical experiments. Specifically, three distributions of network size in the training data are investigated, namely the uniform, Gaussian, and extra distributions. The relationship between the CNN input size and the dimension of the evaluated network is also studied. Extensive experimental results reveal that, compared to training data of uniform distribution, the Gaussian and extra distributions significantly improve both the prediction performance and the generalizability, for both LFR-CNN and PATCHY-SAN, and for various functionality robustness measures. The extension ability of LFR-CNN is significantly better than that of PATCHY-SAN, as verified by extensive comparisons on predicting the robustness of unseen networks. In general, LFR-CNN outperforms PATCHY-SAN, and thus LFR-CNN is recommended over PATCHY-SAN. However, since both LFR-CNN and PATCHY-SAN have advantages in different scenarios, optimal settings of the CNN input size are recommended under different configurations.
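The attack-simulation baseline that the abstract describes can be illustrated with a minimal sketch: sequentially remove nodes (here by highest residual degree, a common targeted-attack heuristic), record the fraction of nodes in the largest connected component after each removal, and average the curve into a scalar robustness value. This is a simplified illustration using plain Python on a toy graph, not the authors' code or experimental setup; the function names and the degree-based attack choice are assumptions for exposition.

```python
from collections import deque

def largest_cc_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        # BFS over the residual network from an unvisited node
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def robustness_curve(adj):
    """Sequentially remove the highest-degree node of the residual network;
    record the largest-component fraction after each removal."""
    n = len(adj)
    removed, curve = set(), []
    for _ in range(n):
        target = max((u for u in adj if u not in removed),
                     key=lambda u: sum(1 for v in adj[u] if v not in removed))
        removed.add(target)
        curve.append(largest_cc_size(adj, removed) / n)
    return curve

# Toy example: a 5-node path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
curve = robustness_curve(adj)       # one value per removed node
R = sum(curve) / len(curve)         # scalar robustness: mean of the curve
```

Even this toy run shows why simulation is expensive: each of the N removal steps requires a full connectivity recomputation, which is what motivates replacing the simulation with a learned CNN predictor.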

© 2023 IEEE.
Original language: English
Article number: 10130828
Pages (from-to): 13496-13507
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 10
Online published: 23 May 2023
DOIs
Publication status: Published - Oct 2024

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62002249 and Grant 61873167, and in part by the Hong Kong Research Grants Council through the General Research Funds (GRF) under Grant CityU11206320.

Research Keywords

  • Complex network
  • convolutional neural network (CNN)
  • learning feature representation (LFR)
  • prediction
  • robustness
  • controllability
