Learning strategy for continuous robot visual control: A multi-objective perspective

Research output: Journal Publications and Reviews (RGC: 21, 22, 62), publication in refereed journal, peer-reviewed

2 Scopus Citations

Detail(s)

Original language: English
Article number: 109448
Number of pages: 15
Journal / Publication: Knowledge-Based Systems
Volume: 252
Online published: 23 Jul 2022
Publication status: Published - 27 Sept 2022

Abstract

Robot visual control aims to achieve three general objectives: smoothness, rapidity, and target keeping. Because these objectives conflict, robot visual control is difficult to achieve and is often formulated as a multi-objective optimization problem (MOP). Conventional MOP solutions assign constant weights to the objectives throughout the decision process. In practice, however, a robot focuses on different objectives in different motion phases, so time-varying visual control is desired. Deep Reinforcement Learning (DRL) is a promising approach to such time-varying decisions in the MOP domain, but well-known DRL solutions suffer from high computing cost and low data efficiency when handling real-time visual control. To satisfy the control requirements and improve learning efficiency when applying DRL to robot visual control, this paper develops a lightweight DRL solution, referred to as Fuzzy Cerebellar Actor-critic (FCAC). In FCAC, fuzzy coding is employed to represent continuous observations, and the policy is evaluated by a set of embedding vectors consisting of weighted states. Based on the observation error, a stochastic actor-critic policy is then learned to compute a suitable continuous control gain. To evaluate the robust-control performance of the proposed FCAC, we simulated several general robot tasks. Experimental results show that robots controlled by the DRL-driven strategies perform well with diverse controllers under noise interference, and that FCAC achieves higher learning efficiency and lower cost than existing DRL solutions in the MOP domain.
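The pipeline the abstract describes (fuzzy coding of a continuous observation, a linear critic over the resulting membership features, and a stochastic actor producing a continuous gain) can be sketched as follows. This is a minimal illustrative reading, not the paper's exact formulation: the triangular membership functions, the Gaussian policy, the toy 1-D regulation dynamics, and all names (`fuzzy_features`, `FuzzyActorCritic`) are assumptions introduced here for illustration.

```python
import numpy as np

def fuzzy_features(x, centers, width):
    """Fuzzy coding of a scalar observation: triangular membership
    degrees over a set of fuzzy sets, normalized to sum to 1."""
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    s = mu.sum()
    return mu / s if s > 0 else mu

class FuzzyActorCritic:
    """Linear actor-critic over fuzzy features (illustrative sketch).

    The critic estimates state value as a weighted sum of memberships;
    the actor outputs a Gaussian-distributed continuous control gain
    whose mean is likewise linear in the fuzzy features."""

    def __init__(self, centers, sigma=0.3, alpha_v=0.1, alpha_pi=0.01, gamma=0.95):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma, self.alpha_v, self.alpha_pi, self.gamma = sigma, alpha_v, alpha_pi, gamma
        self.w_v = np.zeros_like(self.centers)   # critic weights
        self.w_pi = np.zeros_like(self.centers)  # actor weights (policy mean)

    def act(self, phi, rng):
        mean = phi @ self.w_pi
        return rng.normal(mean, self.sigma), mean

    def update(self, phi, a, mean, r, phi_next, done):
        # One-step TD error drives both critic and actor updates.
        v_next = 0.0 if done else phi_next @ self.w_v
        td = r + self.gamma * v_next - phi @ self.w_v
        self.w_v += self.alpha_v * td * phi
        # Gaussian-policy score function: (a - mean) / sigma^2 * phi
        self.w_pi += self.alpha_pi * td * (a - mean) / self.sigma**2 * phi
        return td

# Toy demo (assumed dynamics): learn a state-dependent gain that
# regulates a scalar observation error x toward 0.
rng = np.random.default_rng(0)
centers = np.linspace(-1.0, 1.0, 9)
width = 0.25
agent = FuzzyActorCritic(centers)
for episode in range(200):
    x = rng.uniform(-1.0, 1.0)
    for t in range(30):
        phi = fuzzy_features(x, centers, width)
        a, mean = agent.act(phi, rng)
        x_next = np.clip(x - a * x + rng.normal(0.0, 0.01), -1.0, 1.0)
        r = -x_next**2  # reward: penalize remaining observation error
        agent.update(phi, a, mean, r, fuzzy_features(x_next, centers, width), t == 29)
        x = x_next
```

Because each observation activates only a few overlapping fuzzy sets, the representation stays sparse and the per-step update is a handful of vector operations, which is consistent with the lightweight, low-cost character the abstract claims for FCAC.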

Research Area(s)

  • Robot visual control, Deep reinforcement learning, Multi-objective optimization problem, Fuzzy Cerebellar Actor-critic