Learning Dynamic Memory Networks for Object Tracking

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

94 Scopus Citations

Detail(s)

Original language: English
Title of host publication: Computer Vision – ECCV 2018
Subtitle of host publication: 15th European Conference, 2018, Proceedings
Editors: Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu
Publisher: Springer Verlag
Pages: 153-169
ISBN (electronic): 9783030012403
ISBN (print): 9783030012397
Publication status: Published - Sept 2018

Publication series

Name: Lecture Notes in Computer Science
Volume: 11213
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Conference

Title: 15th European Conference on Computer Vision (ECCV 2018)
Place: Germany
City: Munich
Period: 8 - 14 September 2018

Abstract

Template-matching methods for visual tracking have recently gained popularity due to their competitive performance and fast speed. However, they lack effective ways to adapt to changes in the target object’s appearance, so their tracking accuracy remains far from state-of-the-art. In this paper, we propose a dynamic memory network that adapts the template to the target’s appearance variations during tracking. An LSTM is used as a memory controller: its input is the search feature map, and its outputs are the control signals for reading and writing the memory block. Since the target’s location in the search feature map is initially unknown, an attention mechanism is applied to focus the LSTM input on the potential target. To prevent overly aggressive model adaptation, we apply gated residual template learning to control how much of the retrieved memory is combined with the initial template. Unlike tracking-by-detection methods, where the object’s information is maintained in the weight parameters of neural networks and expensive online fine-tuning is required for adaptation, our tracker runs completely feed-forward and adapts to the target’s appearance changes by updating the external memory. Moreover, unlike other tracking methods, whose model capacity is fixed after offline training, the capacity of our tracker can easily be enlarged as the memory requirements of a task grow, which is favorable for memorizing long-term object information. Extensive experiments on OTB and VOT demonstrate that our tracker, MemTrack, performs favorably against state-of-the-art tracking methods while retaining a real-time speed of 50 fps.
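The gated residual template update described in the abstract can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code: the function names and the toy arrays are hypothetical, and in the actual MemTrack model the gate is predicted by the LSTM memory controller rather than passed in directly. The core idea is that the final template is the initial template plus a gated residual from the retrieved memory, so a gate near zero keeps the tracker conservative while a gate near one admits the full memory update.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function, squashing logits into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def gated_residual_template(initial_template, retrieved_memory, gate_logits):
    """Gated residual template learning (illustrative sketch).

    final = T0 + sigmoid(gate_logits) * T_retrieved
    The elementwise gate controls how much of the retrieved memory
    is mixed into the initial template, preventing overly aggressive
    adaptation of the tracking template.
    """
    gate = sigmoid(gate_logits)
    return initial_template + gate * retrieved_memory

# Toy 2x2 "feature map" example with zero gate logits (gate = 0.5 everywhere):
t0 = np.ones((2, 2))            # initial template features
mem = np.full((2, 2), 0.5)      # features retrieved from external memory
out = gated_residual_template(t0, mem, gate_logits=np.zeros((2, 2)))
# out = 1 + 0.5 * 0.5 = 1.25 everywhere
```

In the paper's setting the templates would be deep convolutional feature maps and the gate a learned function of the controller state; the residual form ensures the initial template is never fully overwritten.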

Research Area(s)

  • Addressable memory, Gated residual template learning

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Citation Format(s)

Learning Dynamic Memory Networks for Object Tracking. / Yang, Tianyu; Chan, Antoni B.
Computer Vision – ECCV 2018: 15th European Conference, 2018, Proceedings. ed. / Vittorio Ferrari; Martial Hebert; Cristian Sminchisescu. Springer Verlag, 2018. p. 153-169 (Lecture Notes in Computer Science; Vol. 11213).
