TY - GEN
T1 - Towards mobile embodied 3D avatar as telepresence vehicle
AU - Tokuda, Yutaka
AU - Hiyama, Atsushi
AU - Miura, Takahiro
AU - Tanikawa, Tomohiro
AU - Hirose, Michitaka
PY - 2013
Y1 - 2013
N2 - In this paper, we present a mobile embodied 3D avatar that shifts the rich experience of avatars from the virtual world into real life through a new style of telepresence. Conventional telepresence research has focused on the exact recreation of face-to-face communication at a fixed position in a specialized room, so there has been far less research on life-sized mobile telepresence systems despite the many off-the-shelf mobile telepresence robots available. We propose various scalable holographic displays to visualize a life-sized avatar in real life. In addition, we introduce an architecture to control the embodied avatar according to the user's intention by extending SAIBA, a popular architecture for multimodal virtual humans. Our prototype system was tested with five simple avatar animations embodied on a wheeled platform robot with a life-sized transparent holographic display, and it demonstrated realistic avatar movement complying with the user's intention and the situation at the avatar's remote location. © 2013 Springer-Verlag Berlin Heidelberg.
KW - avatar
KW - interaction techniques
KW - mobility
KW - multimodal interaction
KW - platforms and metaphors
KW - SAIBA
KW - telepresence
KW - telework
KW - transparent display
UR - http://www.scopus.com/inward/record.url?scp=84880753743&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-84880753743&origin=recordpage
U2 - 10.1007/978-3-642-39194-1_77
DO - 10.1007/978-3-642-39194-1_77
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 9783642391934
VL - 8011 LNCS
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 671
EP - 680
BT - Universal Access in Human-Computer Interaction: Applications and Services for Quality of Life - 7th International Conference, UAHCI 2013, Held as Part of HCI International 2013, Proceedings
PB - Springer Verlag
T2 - 7th International Conference on Universal Access in Human-Computer Interaction: Design Methods, Tools, and Interaction Techniques for eInclusion, UAHCI 2013, Held as Part of 15th International Conference on Human-Computer Interaction, HCI 2013
Y2 - 21 July 2013 through 26 July 2013
ER -