Video2mesh: 3D human pose and shape recovery by a temporal convolutional transformer network

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review

Detail(s)

Original language: English
Number of pages: 10
Journal / Publication: IET Computer Vision
Publication status: Online published, 27 Feb 2023

Abstract

From a 2D video of a person in action, human mesh recovery aims to infer the 3D human pose and shape frame by frame. Despite progress in video-based human pose and shape estimation, it remains challenging to guarantee high accuracy and smoothness simultaneously. To tackle this problem, we propose Video2mesh, a temporal network based on a temporal convolutional transformer (TConvTransformer) that recovers an accurate and smooth human mesh from 2D video. The temporal convolution block achieves sequence-level smoothness by aggregating image features from adjacent frames. The subsequent multi-attention transformer improves accuracy because its multiple attention subspaces yield a better middle-frame feature representation. In addition, a TConvTransformer discriminator is trained jointly with our 3D human mesh temporal encoder; it further improves accuracy and smoothness by restricting the pose and shape to a more reliable space learned from the AMASS dataset. Extensive experiments on three standard benchmark datasets show that the proposed Video2mesh outperforms state-of-the-art methods in both accuracy and smoothness.
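The abstract describes the encoder only at a high level. As a rough illustration, the sketch below shows one plausible way to arrange a temporal convolution block followed by a multi-head transformer that yields a middle-frame feature, written in PyTorch. Everything here (the layer sizes, the 85-dimensional SMPL pose/shape/camera output, and the class name TConvTransformerEncoder) is an assumption for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TConvTransformerEncoder(nn.Module):
    """Illustrative temporal encoder: a 1-D temporal convolution followed by a
    transformer encoder, pooled to a middle-frame feature. Sizes and names are
    assumptions for illustration, not the paper's configuration."""

    def __init__(self, feat_dim=2048, model_dim=512, num_heads=8, num_layers=3):
        super().__init__()
        # Temporal convolution: mixes each frame's feature with its neighbours,
        # which encourages sequence-level smoothness.
        self.tconv = nn.Conv1d(feat_dim, model_dim, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=model_dim, nhead=num_heads,
                                           batch_first=True)
        # Multi-head attention lets each head attend in its own subspace.
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Assumed output: 72 SMPL pose + 10 shape + 3 weak-perspective camera values.
        self.head = nn.Linear(model_dim, 85)

    def forward(self, x):
        # x: (batch, frames, feat_dim) per-frame image features from a CNN backbone
        h = self.tconv(x.transpose(1, 2)).transpose(1, 2)  # (batch, frames, model_dim)
        h = self.transformer(h)
        mid = h[:, h.shape[1] // 2]  # middle-frame representation
        return self.head(mid)        # predicted pose/shape/camera parameters

if __name__ == "__main__":
    clips = torch.randn(2, 9, 2048)             # two 9-frame clips of backbone features
    params = TConvTransformerEncoder()(clips)
    print(params.shape)                          # torch.Size([2, 85])
```

Under the paper's adversarial setup, the predicted pose and shape parameters would additionally be scored by the TConvTransformer discriminator against real AMASS motion sequences; the abstract does not specify the exact loss, so that part is omitted here.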

Research Area(s)

  • image motion analysis, motion estimation, pose estimation, shape measurement, video signal processing