Hybrid MDP based integrated hierarchical Q-learning

ChunLin Chen, DaoYi Dong, Han-Xiong Li, Tzyh-Jong Tarn

    Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

    19 Citations (Scopus)

    Abstract

    As a widely used reinforcement learning method, Q-learning is bedeviled by the curse of dimensionality: its computational complexity grows dramatically with the size of the state-action space. To combat this difficulty, an integrated hierarchical Q-learning framework is proposed based on a hybrid Markov decision process (MDP) that uses temporal abstraction instead of the simple MDP. The learning process is naturally organized into multiple levels of learning, e.g., a quantitative (lower) level and a qualitative (upper) level, which are modeled as an MDP and a semi-MDP (SMDP), respectively. This hierarchical control architecture constitutes a hybrid MDP serving as the model of hierarchical Q-learning, which bridges the two levels of learning. The proposed hierarchical Q-learning scales well and accelerates learning through the upper-level learning process, and thus provides an effective integrated learning and control scheme for complex problems. Several experiments are carried out using a puzzle problem in a gridworld environment and a navigation control problem for a mobile robot. The experimental results demonstrate the effectiveness and efficiency of the proposed approach. © 2011 Science China Press and Springer-Verlag Berlin Heidelberg.
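
    The abstract describes two coupled learning levels: a lower (quantitative) level updated with the ordinary one-step Q-learning rule, and an upper (qualitative) level updated with an SMDP rule over temporally extended behaviours. The sketch below is a minimal, hedged illustration of that two-level structure, not the authors' implementation; the class and method names, the option/action interfaces, and the epsilon-greedy policy are all assumptions made for illustration.

    ```python
    # Illustrative sketch (not the paper's code) of two-level Q-learning over a
    # hybrid MDP: an upper SMDP level over temporally abstracted options and a
    # lower MDP level over primitive actions. All names are hypothetical.
    import random
    from collections import defaultdict

    class HierarchicalQLearner:
        def __init__(self, options, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
            self.options = options            # abstract upper-level behaviours (options)
            self.actions = actions            # primitive lower-level actions
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.Q_upper = defaultdict(float) # Q(abstract_state, option)  -- SMDP level
            self.Q_lower = defaultdict(float) # Q(option, state, action)   -- MDP level

        def choose(self, q_table, keys):
            # epsilon-greedy selection over the given candidate keys
            if random.random() < self.epsilon:
                return random.choice(keys)
            return max(keys, key=lambda k: q_table[k])

        def update_lower(self, option, s, a, r, s_next):
            # standard one-step Q-learning update inside the current option
            best_next = max(self.Q_lower[(option, s_next, a2)] for a2 in self.actions)
            td = r + self.gamma * best_next - self.Q_lower[(option, s, a)]
            self.Q_lower[(option, s, a)] += self.alpha * td

        def update_upper(self, s_abs, option, cumulative_r, tau, s_abs_next):
            # SMDP Q-learning update: the option ran for tau primitive steps and
            # accumulated the discounted reward cumulative_r, so the bootstrap
            # term is discounted by gamma**tau rather than by gamma
            best_next = max(self.Q_upper[(s_abs_next, o)] for o in self.options)
            td = cumulative_r + (self.gamma ** tau) * best_next - self.Q_upper[(s_abs, option)]
            self.Q_upper[(s_abs, option)] += self.alpha * td
    ```

    The gamma**tau discount in the upper-level update is what distinguishes the SMDP (temporally abstract) level from the ordinary MDP level, and is the point where the two levels of the hybrid MDP are bridged.
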
    Original language: English
    Pages (from-to): 2279-2294
    Journal: Science China Information Sciences
    Volume: 54
    Issue number: 11
    DOIs
    Publication status: Published - Nov 2011

    Research Keywords

    • hierarchical Q-learning
    • hybrid MDP
    • reinforcement learning
    • temporal abstraction
