Anchor-Free Tracker Based on Space-Time Memory Network

Guang Han*, Chen Cao, Jixin Liu, Sam Kwong

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

6 Citations (Scopus)

Abstract

In visual object tracking, existing trackers struggle with appearance deformation, occlusion, and interference from similar objects. To address these problems, this article proposes a new Anchor-free Tracker based on a Space-time Memory Network (ATSMN). This work introduces a space-time memory network, a memory feature fusion network, and a transformer-based feature cross-fusion network. Through the synergy of these components, the tracker makes full use of the temporal context information in memory frames related to the object and adapts better to changes in the object's appearance, yielding accurate classification and regression results. Extensive experiments on challenging benchmarks show that ATSMN achieves state-of-the-art tracking performance compared with other advanced trackers. © 2022 IEEE
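The abstract does not give the network's equations, but the core memory-read idea it describes (the current frame attending to features stored from past memory frames) is commonly implemented as a softmax attention over flattened spatial features. The sketch below is a minimal NumPy illustration of that general mechanism, not the authors' exact architecture; all shapes and names are assumptions.

```python
import numpy as np

def memory_read(query, mem_keys, mem_values):
    """Illustrative space-time memory read (hypothetical, not ATSMN's code):
    current-frame query features attend to key/value features pooled
    from past (memory) frames, aggregating temporal context.

    query:      (Nq, C) current-frame features at Nq spatial positions
    mem_keys:   (Nm, C) keys from all memory frames, flattened spatially
    mem_values: (Nm, C) values at the same memory positions
    returns:    (Nq, C) temporally aggregated features
    """
    scale = np.sqrt(query.shape[1])              # scaled dot-product attention
    scores = query @ mem_keys.T / scale          # (Nq, Nm) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)  # subtract max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over memory axis
    return weights @ mem_values                  # weighted sum of memory values

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))    # 4 query positions, 8 channels (toy sizes)
k = rng.standard_normal((12, 8))   # 12 memory positions, e.g. 3 stored frames
v = rng.standard_normal((12, 8))
out = memory_read(q, k, v)
print(out.shape)
```

In a full tracker, the aggregated output would then be fused with the current-frame feature (the role the paper assigns to its fusion networks) before the anchor-free classification and regression heads.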
Original language: English
Pages (from-to): 73-83
Journal: IEEE Multimedia
Volume: 20
Issue number: 1
Online published: 23 Sept 2022
DOIs
Publication status: Published - Jan 2023

Research Keywords

  • Anchor-free
  • Data mining
  • Feature cross fusion
  • Feature extraction
  • Memory management
  • Object tracking
  • Space-time memory network
  • Task analysis
  • Transformers
  • Video sequences
