Evaluate Inference Attacks: Attack and Defense against 2D Semantic Segmentation Models

Yihan LIAO, Jacky KEUNG, Jingyu ZHANG*, Yurou DAI, Shuo LIU

*Corresponding author for this work

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

Abstract

Deep learning (DL)-based 2D semantic segmentation (SS) plays a vital role in the perception task of autonomous driving. However, because SS models rely on DL, they are vulnerable to inference attacks. Recent research has shown that SS models are susceptible to membership inference attacks, yet other inference attacks remain underexplored. Our study fills this gap by comprehensively investigating the vulnerabilities of two widely used RGB image-based 2D SS models (DeepLabV3 and DeepLabV3+) against three inference attacks: membership inference, attribute inference, and model inversion. We evaluate attack effectiveness on three backbones (MobileNetV2, ResNet50, and ResNet101) across three datasets (VOC2012, CityScapes, and ADE20K), where attack accuracy reaches up to 95% (membership inference), 40% (attribute inference), and 70% (model inversion), revealing that deeper networks are more prone to privacy leakage under inference attacks. Consequently, we introduce differential privacy and model pruning as defensive mechanisms, which significantly reduce attack performance: average attack accuracy drops by 20% across the three inference attacks. Our findings reveal critical privacy vulnerabilities in SS tasks and offer practical guidance for developing more robust SS models for autonomous driving. © 2025 Copyright held by the owner/author(s).
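
To give a concrete feel for the threat model summarized above, the minimal Python sketch below illustrates a loss-threshold membership inference attack against a DeepLabV3 segmentation model, followed by a magnitude-based pruning defense. This is not the authors' implementation: the model weights, the 0.5 loss threshold, the 30% sparsity level, and the helper names (mean_pixel_loss, infer_membership) are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    import torch.nn.utils.prune as prune
    from torchvision.models.segmentation import deeplabv3_resnet50

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Victim model: DeepLabV3 with a ResNet50 backbone, one of the settings
    # evaluated in the paper (the weights here are untrained placeholders).
    model = deeplabv3_resnet50(num_classes=21).to(device).eval()

    @torch.no_grad()
    def mean_pixel_loss(images, masks):
        """Per-image average pixel cross-entropy of the victim model.

        images: float tensor (N, 3, H, W); masks: long tensor (N, H, W).
        """
        logits = model(images.to(device))["out"]
        per_pixel = F.cross_entropy(
            logits, masks.to(device), ignore_index=255, reduction="none"
        )
        return per_pixel.mean(dim=(1, 2))

    def infer_membership(images, masks, threshold=0.5):
        """Loss-threshold membership inference: predict 'member' (1) when
        the loss falls below a tuned threshold, exploiting the fact that
        models fit training samples more tightly than unseen samples."""
        return (mean_pixel_loss(images, masks) < threshold).long()

    # Defense sketch: magnitude-based global unstructured pruning, one of
    # the two defenses studied; the 30% sparsity level is an assumption.
    conv_params = [
        (m, "weight") for m in model.modules() if isinstance(m, torch.nn.Conv2d)
    ]
    prune.global_unstructured(
        conv_params, pruning_method=prune.L1Unstructured, amount=0.3
    )

In practice the decision threshold would be calibrated on shadow-model outputs, and the pruned (or differentially private) model would be re-evaluated with the same attack to measure the drop in attack accuracy.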
Original language: English
Journal: ACM Transactions on Autonomous and Adaptive Systems
Online published: 25 Apr 2025
DOIs
Publication status: Online published - 25 Apr 2025

Bibliographical note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Research Keywords

  • Semantic segmentation
  • Inference attack
  • Differential privacy
  • Model pruning
