TY - JOUR
T1 - Adaptive Optimal Control of Networked Nonlinear Systems With Stochastic Sensor and Actuator Dropouts Based on Reinforcement Learning
AU - Jiang, Yi
AU - Liu, Lu
AU - Feng, Gang
PY - 2022/6/22
Y1 - 2022/6/22
N2 - This article investigates the adaptive optimal control problem for networked discrete-time nonlinear systems with stochastic packet dropouts in both controller-to-actuator and sensor-to-controller channels. A Bernoulli model-based Hamilton–Jacobi–Bellman (BHJB) equation is first developed to deal with the corresponding nonadaptive optimal control problem with known system dynamics and probability models of packet dropouts. The solvability of the nonadaptive optimal control problem is analyzed, and the stability and optimality of the resulting closed-loop system are proven. Two reinforcement learning (RL)-based policy iteration (PI) and value iteration (VI) algorithms are further developed to obtain the solution to the BHJB equation, and their convergence analysis is also provided. Furthermore, in the absence of a priori knowledge of partial system dynamics and probabilities of packet dropouts, two more online RL-based PI and VI algorithms are developed by using critic–actor approximators and a packet dropout probability estimator. It is shown that the adaptive optimal control problem of concern can be solved by the proposed online RL-based PI and VI algorithms. Finally, simulation studies of a single-link manipulator are provided to illustrate the effectiveness of the proposed approaches.
KW - Adaptation models
KW - Adaptive optimal control
KW - Adaptive systems
KW - Approximation algorithms
KW - Bernoulli model-based Hamilton–Jacobi–Bellman (BHJB) equation
KW - Closed loop systems
KW - Heuristic algorithms
KW - Mathematical models
KW - networked discrete-time nonlinear systems
KW - Optimal control
KW - reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85133752678&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85133752678&origin=recordpage
U2 - 10.1109/TNNLS.2022.3183020
DO - 10.1109/TNNLS.2022.3183020
M3 - 21_Publication in refereed journal
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
SN - 2162-237X
ER -