Abstract
In this article, we investigate the optimal output tracking problem for linear discrete-time systems with unknown dynamics using reinforcement learning (RL) and robust output regulation theory. Unlike most existing works, which rely on full state measurements, this output tracking problem permits the use of only the outputs of the reference system and the controlled system, rather than their states. The optimal tracking problem is formulated as a linear quadratic regulation problem by proposing a family of dynamic discrete-time controllers. It is then shown that solving the output tracking problem is equivalent to solving the output regulation equations, whose solution, however, requires complete and accurate knowledge of the system dynamics. To remove this requirement, an off-policy RL algorithm is proposed that uses only the measured output data along the trajectory of the system and the reference output. By introducing a reexpression error and analyzing the rank condition of the parameterization matrix, we guarantee the uniqueness of the proposed RL-based optimal control via output feedback.
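The abstract describes an off-policy RL algorithm that learns an optimal controller from measured trajectory data without knowing the system matrices. The paper's method works with output data and a dynamic controller; as a simplified illustration of the underlying idea only, the sketch below runs off-policy Q-learning-style policy iteration for a standard discrete-time LQR problem with state data. All names, dimensions, and numeric values (the matrices `A`, `B`, `Q`, `R`, the noise level, and the iteration counts) are illustrative assumptions, not taken from the paper; `A` and `B` are used solely to simulate the plant, while the learner touches only the recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2nd-order plant (values are assumptions; the learner below
# never reads A or B directly -- they only generate the trajectory data).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost
R = np.eye(1)          # input cost
n, m = 2, 1
p = n + m              # dimension of z = [x; u]
idx = np.triu_indices(p)

def vech(Z):
    """Stack the upper triangle of a symmetric matrix, doubling off-diagonals,
    so that theta @ vech(z z^T) equals z^T H z for theta = upper triangle of H."""
    W = Z + Z.T - np.diag(np.diag(Z))
    return W[idx]

K = np.zeros((m, n))   # initial stabilizing gain (this A is already stable)
for _ in range(8):
    # Collect data under an exploratory behavior policy (off-policy learning:
    # the applied input u differs from the evaluated target policy -K x).
    Phi, y = [], []
    x = rng.standard_normal(n)
    for _ in range(100):
        u = -K @ x + rng.standard_normal(m)          # exploration noise
        r = x @ Q @ x + u @ R @ u                    # stage cost
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, -K @ x_next])  # target-policy action
        # Bellman equation for the Q-matrix H: z^T H z = r + z'^T H z'
        Phi.append(vech(np.outer(z, z)) - vech(np.outer(z_next, z_next)))
        y.append(r)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = np.zeros((p, p))
    H[idx] = theta
    H = H + H.T - np.diag(np.diag(H))                # rebuild symmetric H
    # Policy improvement: argmin_u [x;u]^T H [x;u]  =>  u = -H_uu^{-1} H_ux x
    K = np.linalg.solve(H[n:, n:], H[n:, :n])
```

Because the simulated dynamics are noise-free, each least-squares policy evaluation is exact and the loop reduces to model-free policy iteration, so `K` converges to the LQR gain that a Riccati solver would produce from the model.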
| Original language | English |
|---|---|
| Pages (from-to) | 2391-2398 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 68 |
| Issue number | 4 |
| Online published | 5 May 2022 |
| DOIs | |
| Publication status | Published - Apr 2023 |
Research Keywords
- Adaptive optimal control
- Optimal control
- Output feedback
- Output tracking
- Process control
- Regulation
- Reinforcement learning (RL)
- Robust output regulation
- Standards
- System dynamics
- Trajectory
Fingerprint
Dive into the research topics of 'Robust Output Regulation and Reinforcement Learning-based Output Tracking Design for Unknown Linear Discrete-Time Systems'.