UniAda : Universal Adaptive Multiobjective Adversarial Attack for End-to-End Autonomous Driving Systems

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review



Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Reliability
Online published: 3 Jun 2024
Publication status: Online published - 3 Jun 2024

Abstract

Adversarial attacks play a pivotal role in testing and improving the reliability of deep learning (DL) systems. Existing literature has demonstrated that subtle perturbations to the input can elicit erroneous outcomes, substantially compromising the security of DL systems. This has emerged as a critical concern in the development of DL-based safety-critical systems such as autonomous driving systems (ADSs). Existing adversarial attack methods on end-to-end (E2E) ADSs have predominantly centered on steering-angle misbehaviors, overlooking speed-related controls and the imperceptibility of perturbations. To address these challenges, we introduce UniAda, a multiobjective white-box attack technique whose core function is crafting an image-agnostic adversarial perturbation capable of simultaneously influencing both steering and speed controls. UniAda capitalizes on a carefully designed multiobjective optimization function with an adaptive weighting scheme (AWS), enabling the concurrent optimization of diverse objectives. Validated with both simulated and real-world driving data, UniAda outperforms five benchmarks across two metrics, inducing average steering deviations of 3.54° to 29° and average speed deviations of 11 to 22 km/h. This systematic approach establishes UniAda as a proven technique for adversarial attacks on modern DL-based E2E ADSs. © 2024 IEEE.
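The abstract describes two ingredients: an image-agnostic (universal) perturbation and a multiobjective loss whose per-objective weights adapt so that neither the steering nor the speed objective dominates. The following is a minimal sketch of that idea, not the paper's implementation: the "driving model" is a hypothetical linear stand-in, the inverse-magnitude weighting is an assumed simplification of the AWS, and all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # flattened toy "image" dimension

# Hypothetical stand-in for an E2E driving model: linear maps to controls.
W_STEER = rng.normal(size=DIM)
W_SPEED = rng.normal(size=DIM)

def model(x):
    """Return (steering, speed) predictions for a flattened image x."""
    return W_STEER @ x, W_SPEED @ x

def craft_universal_perturbation(images, eps=0.05, lr=0.01, steps=100):
    """Sketch: one perturbation, bounded by eps, shared across all images,
    ascending a weighted sum of steering- and speed-deviation objectives."""
    delta = rng.normal(size=DIM) * 1e-3  # small nonzero start so signs are defined
    for _ in range(steps):
        g_steer = np.zeros(DIM)
        g_speed = np.zeros(DIM)
        d_steer = d_speed = 0.0
        for x in images:
            s0, v0 = model(x)
            s1, v1 = model(x + delta)
            d_steer += abs(s1 - s0)
            d_speed += abs(v1 - v0)
            # gradients of |deviation| w.r.t. delta for the linear toy model
            g_steer += np.sign(s1 - s0) * W_STEER
            g_speed += np.sign(v1 - v0) * W_SPEED
        # assumed adaptive weighting: scale each objective by its inverse
        # current magnitude so the lagging objective gets the larger weight
        w1 = 1.0 / (d_steer + 1e-8)
        w2 = 1.0 / (d_speed + 1e-8)
        delta = np.clip(delta + lr * np.sign(w1 * g_steer + w2 * g_speed),
                        -eps, eps)
    return delta

images = rng.normal(size=(16, DIM))
delta = craft_universal_perturbation(images)
steer_dev = np.mean([abs(model(x + delta)[0] - model(x)[0]) for x in images])
speed_dev = np.mean([abs(model(x + delta)[1] - model(x)[1]) for x in images])
```

Because the same `delta` is optimized over the whole batch, it is image-agnostic in the sense the abstract describes; the inverse-magnitude weights are one simple way to keep both objectives moving, standing in for the paper's AWS.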

Research Area(s)

  • Adversarial attacks, autonomous driving, deep learning (DL), multiobjective optimization, white-box attacks