Adaptive Optimal Control of Networked Nonlinear Systems With Stochastic Sensor and Actuator Dropouts Based on Reinforcement Learning

Yi Jiang, Lu Liu*, Gang Feng

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

102 Downloads (CityUHK Scholars)

Abstract

This article investigates the adaptive optimal control problem for networked discrete-time nonlinear systems with stochastic packet dropouts in both controller-to-actuator and sensor-to-controller channels. A Bernoulli model-based Hamilton–Jacobi–Bellman (BHJB) equation is first developed to deal with the corresponding nonadaptive optimal control problem with known system dynamics and probability models of packet dropouts. The solvability of the nonadaptive optimal control problem is analyzed, and the stability and optimality of the resulting closed-loop system are proven. Two reinforcement learning (RL)-based policy iteration (PI) and value iteration (VI) algorithms are further developed to obtain the solution to the BHJB equation, and their convergence analysis is also provided. Furthermore, in the absence of a priori knowledge of partial system dynamics and probabilities of packet dropouts, two more online RL-based PI and VI algorithms are developed by using critic–actor approximators and a packet dropout probability estimator. It is shown that the concerned adaptive optimal control problem can be solved by the proposed online RL-based PI and VI algorithms. Finally, simulation studies of a single-link manipulator are provided to illustrate the effectiveness of the proposed approaches. © 2022 IEEE.
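To illustrate how a Bernoulli dropout model enters a Bellman-style recursion, the following is a minimal, hedged sketch for a scalar linear-quadratic toy problem, not the paper's BHJB algorithm for general nonlinear systems. All numbers (dynamics `a`, `b`, costs `q`, `r`, dropout probability `p`) are made-up example values; the only idea borrowed from the abstract is taking the expectation over a Bernoulli actuator-dropout variable inside value iteration.

```python
# Toy scalar system with actuator dropout: x_{k+1} = a*x_k + b*gamma_k*u_k,
# where gamma_k ~ Bernoulli(1 - p) models the control packet being delivered.
# Quadratic value function V(x) = P*x^2; all parameters are illustrative.

a, b = 0.9, 1.0      # assumed toy dynamics
q, r = 1.0, 1.0      # stage cost: q*x^2 + r*u^2
p = 0.2              # assumed actuator dropout probability

P = 0.0              # value iteration starts from V_0(x) = 0
K = 0.0              # linear feedback gain, u = -K*x
for _ in range(1000):
    # Minimizing E_gamma[q*x^2 + r*u^2 + P*(a*x + b*gamma*u)^2] over u
    # yields a linear gain; the dropout probability scales the control term.
    K = (1 - p) * P * a * b / (r + (1 - p) * P * b * b)
    # Riccati-like update with the expectation over gamma taken explicitly:
    # with prob (1-p) the control arrives, with prob p the plant runs open loop.
    P_next = q + r * K * K + (1 - p) * P * (a - b * K) ** 2 + p * P * a * a
    if abs(P_next - P) < 1e-12:
        P = P_next
        break
    P = P_next

print(P, K)
```

The sketch converges because the uncontrolled expected contraction `a**2 = 0.81` is already below one; for the general nonlinear case treated in the article, convergence of PI/VI is what the paper's analysis establishes.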
Original language: English
Pages (from-to): 3107-3120
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 3
Online published: 22 Jun 2022
DOI: https://doi.org/10.1109/TNNLS.2022.3183020
Publication status: Published - Mar 2024

Research Keywords

  • Adaptation models
  • Adaptive optimal control
  • Adaptive systems
  • Approximation algorithms
  • Bernoulli model-based Hamilton–Jacobi–Bellman (BHJB) equation
  • Closed loop systems
  • Heuristic algorithms
  • Mathematical models
  • networked discrete-time nonlinear systems
  • Optimal control
  • reinforcement learning (RL)

Publisher's Copyright Statement

  • COPYRIGHT TERMS OF DEPOSITED POSTPRINT FILE: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Jiang, Y., Liu, L., & Feng, G. (2022). Adaptive Optimal Control of Networked Nonlinear Systems With Stochastic Sensor and Actuator Dropouts Based on Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 35(3), 3107-3120. https://doi.org/10.1109/TNNLS.2022.3183020
