Abstract
This paper develops a security mechanism that protects industrial systems against inference attacks, in which an adversary with access to the sensor measurements (state values) of a linear or nonlinear system attempts to infer the system model. The proposed security mechanism consists of two components: (i) a collection of feedback control gains and (ii) a randomized gain selection policy. To mitigate the inference attack, the gain selection policy randomly selects a feedback gain from the set of available control gains at regular intervals. We cast the optimal design of the gain selection policy as an optimization problem in which (i) the quadratic control cost is minimized and (ii) the adversary's uncertainty about the selected control gain is maximized. In our formulation, the adversary's uncertainty about the control gain is captured by the Kullback-Leibler (KL) divergence between a uniform distribution and the posterior distribution of the feedback gains given the history of the system states. We first derive the backward Bellman optimality equation for the gain selection problem and study the structural properties of the optimal gain selection policy. Our results show that the optimal policy depends only on the current state of the system, rather than the entire history of states, which reduces the optimal gain selection problem to a nonlinear Markov decision process. Next, we derive a policy gradient theorem for the gain selection problem, which provides an expression for the gradient of the objective function with respect to the parameter of a stationary (time-invariant) policy. The policy gradient theorem allows us to develop a stochastic gradient descent algorithm for computing an optimal policy.
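As a toy illustration of the KL-based uncertainty measure described above, the sketch below simulates uniform random gain switching on a scalar linear system and computes the KL divergence between a uniform distribution and the adversary's per-step Bayesian posterior over the gains. The system parameters, gain set, and noise level are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar system x_{k+1} = (a - b*K_i) x_k + w_k; all numbers
# here are assumptions for the sketch, not taken from the paper.
a, b, sigma = 1.2, 1.0, 0.1
gains = np.array([0.5, 0.7, 0.9])       # candidate feedback gains K_i
closed = a - b * gains                  # closed-loop dynamics per gain

def kl_uniform_vs_posterior(post):
    """D_KL(uniform || posterior) over the finite gain set."""
    u = 1.0 / len(post)
    return float(np.sum(u * np.log(u / post)))

# At each step a gain is drawn uniformly at random; the adversary observes the
# state transition and forms a Bayesian posterior over which gain was used.
x, kls = 1.0, []
for _ in range(50):
    i = rng.integers(len(gains))        # randomized gain selection
    x_next = closed[i] * x + sigma * rng.standard_normal()
    # log-likelihood of the observed transition under each candidate gain
    loglik = -0.5 * ((x_next - closed * x) / sigma) ** 2
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    kls.append(kl_uniform_vs_posterior(post))
    x = x_next

avg_kl = float(np.mean(kls))
print(avg_kl)  # small values mean the adversary stays close to maximal uncertainty
```

A policy that keeps this divergence small keeps the adversary's posterior close to uniform, i.e., close to maximal uncertainty about the active gain.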
Finally, we demonstrate the effectiveness of our results on different linear and nonlinear systems. Our results indicate that the proposed security mechanism significantly decreases the inference ability of the adversary while having a negligible impact on the control cost. © 2025 IEEE.
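The stochastic-gradient step mentioned in the abstract can be sketched with a generic REINFORCE-style (score-function) estimator for a stationary softmax policy over the gain set. This minimal sketch optimizes only the quadratic-cost part of the objective (the KL term is omitted for brevity), and the system parameters, gains, and step size are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scalar system and candidate gains (assumptions for this sketch).
a, b = 1.1, 1.0
gains = np.array([0.4, 0.6, 0.8])
theta = np.zeros(len(gains))            # parameters of a softmax gain-selection policy

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rollout(theta, horizon=30):
    """Run one episode; return total stage cost and accumulated score function."""
    x, cost, score = 1.0, 0.0, np.zeros_like(theta)
    for _ in range(horizon):
        p = softmax(theta)
        i = rng.choice(len(gains), p=p)     # sample a gain from the policy
        u = -gains[i] * x
        cost += x * x + 0.1 * u * u         # quadratic stage cost
        score += np.eye(len(gains))[i] - p  # grad of log pi(i | theta)
        x = a * x + b * u + 0.05 * rng.standard_normal()
    return cost, score

# REINFORCE: stochastic gradient descent on the expected cost.
for _ in range(200):
    cost, score = rollout(theta)
    theta -= 0.01 * cost * score / 30       # crude, baseline-free update

p_final = softmax(theta)
print(p_final)
```

In the paper's setting the update would instead follow the gradient supplied by the derived policy gradient theorem, with the KL-based uncertainty term included in the objective.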
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 10465-10479 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Information Forensics and Security |
| Volume | 20 |
| Online published | 6 Oct 2025 |
| DOIs | |
| Publication status | Published - 2025 |
Funding
This work was supported in part by Hong Kong Research Grants Council under Project CityU 21208921 and Project CityU 11210024 and in part by Chow Sang Sang Group Research Fund.
Research Keywords
- Security
- Prevention and mitigation
- History
- Uncertainty
- Nonlinear systems
- Numerical models
- Costs
- Computational modeling
- Classification algorithms
- Urban areas
- Cyber-physical security
- Inference attack
- Kullback-Leibler (KL) divergence
- Moving target framework
RGC Funding Information
- RGC-funded
Fingerprint
Research topics of 'A Security Mechanism Against Inference Attacks on Networked Systems'.

Projects
- GRF: Privacy-aware Design of Distributed Networked Control Systems: A Directed Information Approach
  NEKOUEI, E. (Principal Investigator / Project Coordinator)
  1/01/25 → …
  Project: Research
- ECS: Optimal Privacy-aware Design of Networked Control Systems: An Information-theoretic Approach
  NEKOUEI, E. (Principal Investigator / Project Coordinator)
  1/01/22 → 18/11/25
  Project: Research