AMAE : Adaptive Motion-Agnostic Encoder for Event-Based Object Classification

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › peer-review

6 Scopus Citations

Original language: English
Article number: 9116961
Pages (from-to): 4596-4603
Journal / Publication: IEEE Robotics and Automation Letters
Issue number: 3
Online published: 15 Jun 2020
Publication status: Published - Jul 2020


Event cameras, with their low power consumption, high temporal resolution, and high dynamic range, are used increasingly in computer vision. These characteristics enable event cameras to perform low-energy, fast-response object classification in challenging scenarios. Nevertheless, because the output of event cameras is unconventional, dedicated encoding methods are required for event-based classification. Existing event-based encoding methods focus on extracting semantic and motion information from event signals. However, two main problems remain: (i) the motion information in event signals leads to mispredictions by the classifiers, and (ii) effective evaluation methods for validating the motion robustness of event-based classification models have yet to be proposed. In this work, we introduce an adaptive motion-agnostic encoder for event streams to address the first problem. The proposed encoder allows us to extract semantic information from an object that remains consistent across different motion conditions. In addition, we propose a novel motion-inconsistency evaluation method to assess the motion robustness of classification models. We apply our method to several benchmark datasets and evaluate it with both motion-consistency and motion-inconsistency testing. Classification results show that our proposed encoder outperforms state-of-the-art methods by a large margin.
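The abstract does not detail the AMAE encoder itself, so as background only, here is a minimal sketch of a common baseline encoding that the abstract's "existing event-based encoding methods" alludes to: accumulating an event stream of (x, y, t, polarity) tuples into a two-channel polarity histogram. The function name and event layout are illustrative assumptions, not the paper's method.

```python
import numpy as np

def events_to_histogram(events, height, width):
    """Accumulate an event stream into a 2-channel polarity histogram.

    events: array of shape (N, 4) with columns (x, y, t, polarity),
    where polarity is 0 (OFF) or 1 (ON). This is a generic baseline
    encoding for illustration, NOT the AMAE encoder from the paper.
    """
    hist = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3].astype(int)
    # np.add.at handles repeated pixel coordinates correctly,
    # unlike plain fancy-index assignment.
    np.add.at(hist, (p, y, x), 1.0)
    return hist

# Three events at pixel (x=2, y=3): two ON, one OFF.
ev = np.array([[2.0, 3.0, 0.01, 1.0],
               [2.0, 3.0, 0.02, 0.0],
               [2.0, 3.0, 0.03, 1.0]])
h = events_to_histogram(ev, height=5, width=5)
```

Note that such count-based encodings retain motion-dependent structure (faster motion produces different event densities), which is precisely the motion-sensitivity problem the abstract says AMAE is designed to avoid.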

Research Area(s)

  • Deep learning for visual perception; object detection, recognition, segmentation and categorization