Adaptive Optimal Control of Networked Nonlinear Systems With Stochastic Sensor and Actuator Dropouts Based on Reinforcement Learning
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Jiang, Yi; Liu, Lu; Feng, Gang
Detail(s)
Original language | English |
---|---|
Pages (from-to) | 3107-3120 |
Journal / Publication | IEEE Transactions on Neural Networks and Learning Systems |
Volume | 35 |
Issue number | 3 |
Online published | 22 Jun 2022 |
Publication status | Published - Mar 2024 |
Link(s)
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85133752678&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(248dc002-3555-46f5-a406-9d10aafb7123).html |
Abstract
This article investigates the adaptive optimal control problem for networked discrete-time nonlinear systems with stochastic packet dropouts in both controller-to-actuator and sensor-to-controller channels. A Bernoulli model-based Hamilton–Jacobi–Bellman (BHJB) equation is first developed to deal with the corresponding nonadaptive optimal control problem with known system dynamics and probability models of packet dropouts. The solvability of the nonadaptive optimal control problem is analyzed, and the stability and optimality of the resulting closed-loop system are proven. Two reinforcement learning (RL)-based policy iteration (PI) and value iteration (VI) algorithms are further developed to obtain the solution to the BHJB equation, and their convergence analysis is also provided. Furthermore, in the absence of a priori knowledge of partial system dynamics and probabilities of packet dropouts, two more online RL-based PI and VI algorithms are developed by using critic–actor approximators and a packet dropout probability estimator. It is shown that the concerned adaptive optimal control problem can be solved by the proposed online RL-based PI and VI algorithms. Finally, simulation studies of a single-link manipulator are provided to illustrate the effectiveness of the proposed approaches. © 2022 IEEE.
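To give a rough sense of the value-iteration principle the abstract refers to, the sketch below runs generic VI on a small, invented discrete MDP in which the chosen control reaches the plant only with probability p, a crude stand-in for Bernoulli actuator dropout. This is a minimal illustration only: the toy states, dynamics, costs, and the delivery probability are all assumptions for illustration, and the code does not implement the paper's BHJB equation, its nonlinear dynamics, or its online critic–actor algorithms.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): value iteration for a small
# discrete MDP where the control packet is delivered with probability p_arrive;
# on a dropout, a default (zero-input) transition is taken instead.
n_states, n_actions = 5, 3
p_arrive = 0.8   # assumed Bernoulli delivery probability (illustrative)
gamma = 0.95     # discount factor

rng = np.random.default_rng(0)
# next_state[s, a]: successor when action a is actually applied in state s.
# drop_state[s]: successor when the packet is lost and the default input acts.
next_state = rng.integers(0, n_states, size=(n_states, n_actions))
drop_state = rng.integers(0, n_states, size=n_states)
cost = rng.random((n_states, n_actions))   # invented stage costs

V = np.zeros(n_states)
for _ in range(500):
    # Bellman backup averaged over the Bernoulli dropout event.
    Q = cost + gamma * (p_arrive * V[next_state]
                        + (1 - p_arrive) * V[drop_state][:, None])
    V_new = Q.min(axis=1)                  # greedy (minimum-cost) update
    if np.max(np.abs(V_new - V)) < 1e-10:  # stop once the iterates converge
        V = V_new
        break
    V = V_new

policy = Q.argmin(axis=1)                  # greedy policy from converged Q
```

The converged `V` satisfies the dropout-averaged Bellman fixed-point equation under the greedy policy, which is the property the paper's VI convergence analysis establishes in its far more general setting.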
Research Area(s)
- Adaptation models, Adaptive optimal control, Adaptive systems, Approximation algorithms, Bernoulli model-based Hamilton–Jacobi–Bellman (BHJB) equation, Closed loop systems, Heuristic algorithms, Mathematical models, networked discrete-time nonlinear systems, Optimal control, reinforcement learning (RL)
Citation Format(s)
Adaptive Optimal Control of Networked Nonlinear Systems With Stochastic Sensor and Actuator Dropouts Based on Reinforcement Learning. / Jiang, Yi; Liu, Lu; Feng, Gang.
In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 3, 03.2024, p. 3107-3120.