Precise Gaze Estimation for Mobile Gaze Trackers based on Hybrid Two-view Geometry

Dan Su*, You Fu Li, Yao Guo

*Corresponding author for this work

Research output: Refereed conference paper (with host publication), peer-reviewed

2 Citations (Scopus)

Abstract

In this paper, we propose a novel calibration framework for gaze estimation in mobile gaze tracking systems. In our method, the user's eye and the eye camera are modeled together as a central catadioptric camera, so the epipolar geometry of the mobile gaze tracker can be described by hybrid two-view geometry. To calibrate this model, the user is asked to gaze at calibration points distributed in 3-D space, not all located on one plane. Using binocular training data, we apply a 3×6 local hybrid fundamental matrix to register pupil centers with epipolar lines in the scene image. The gaze point in the scene image, viewed at different depths, can then be uniquely determined as the intersection of the two epipolar lines computed from the binocular data. Simulation and experimental results show the effectiveness of the proposed calibration framework for mobile gaze trackers.
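The core geometric step described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes a Veronese-style lifting of the pupil center to a 6-vector (the exact lifting used for the 3×6 hybrid fundamental matrix is defined in the paper), maps each eye's pupil center to an epipolar line in the scene image, and intersects the two binocular lines with the homogeneous cross product. The function names, the lifting order, and the example matrices are all hypothetical.

```python
import numpy as np

def lift(pupil):
    """Lift a 2-D pupil center (u, v) to the 6-vector
    [u^2, u*v, v^2, u, v, 1] used by central catadioptric models.
    (Illustrative ordering; the paper's lifting may differ.)"""
    u, v = pupil
    return np.array([u * u, u * v, v * v, u, v, 1.0])

def epipolar_line(F_hybrid, pupil):
    """Map a pupil center to its epipolar line in the scene image
    via a 3x6 hybrid fundamental matrix: l = F @ lift(p)."""
    return F_hybrid @ lift(pupil)

def gaze_point(F_left, F_right, pupil_left, pupil_right):
    """Gaze point as the intersection of the two binocular epipolar
    lines, computed as their homogeneous cross product."""
    l_left = epipolar_line(F_left, pupil_left)
    l_right = epipolar_line(F_right, pupil_right)
    g = np.cross(l_left, l_right)   # homogeneous intersection point
    return g[:2] / g[2]             # dehomogenize to pixel coordinates
```

In this sketch the two 3×6 matrices would come from the calibration stage (gazing at non-coplanar 3-D points); once estimated, each new pair of pupil centers yields two scene-image lines whose crossing is the estimated gaze point.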
Original language: English
Title of host publication: Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)
Publisher: IEEE
Pages: 302-307
ISBN (Electronic): 9781538637425
ISBN (Print): 9781538637418, 9781538637432
DOIs
Publication status: Published - Dec 2017
Event: 2017 IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2017) - The Parisian Macao, Macau SAR, China
Duration: 5 Dec 2017 - 8 Dec 2017
http://2017.ieee-robio.org/

Publication series

Name: IEEE International Conference on Robotics and Biomimetics, ROBIO
Publisher: IEEE

Conference

Conference: 2017 IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2017)
Place: China
City: Macau SAR
Period: 5/12/17 - 8/12/17
Internet address: http://2017.ieee-robio.org/
