3D Content and Interaction Prototyping with Mobile Augmented Reality

Student thesis: Doctoral Thesis

Award date: 28 Jun 2022

Abstract

With the advance and spread of three-dimensional (3D) technologies, content creators have produced a large volume of 3D content. This virtual 3D content takes either a static form (e.g., 3D sketches) or a dynamic form (e.g., 3D animation). As an intermediary, 3D interaction bridges users and 3D content through 3D user interfaces, encompassing diverse interaction schemes among humans, 3D content, and surrounding environments. With the popularity of ubiquitous devices, people increasingly need to create and interact with 3D content in situ, grounded in physical environments. However, how to design 3D content and interactions that are closely tied to a real-world environment remains underexplored. Mobile augmented reality (AR) technologies have recently provided a flexible way to realize early-stage designs by overlaying digital content on real objects or environments. Yet mobile AR prototyping still raises open issues, and few mobile AR tools and techniques have been proposed for freely prototyping 3D content and interactions, especially those closely coupled with real-world environments. This thesis explores fundamental issues of 3D content and interaction prototyping with mobile AR and presents novel mobile AR prototyping techniques for 3D content and interactions.

I first study the fundamental issue of using an AR-enabled mobile phone as a 3D pen to create virtual 3D curves. Users commonly use 3D curves to depict initial design ideas for content and interaction prototyping. Recent advances in motion tracking (e.g., visual-inertial odometry) allow a mobile phone to serve as a 3D pen, significantly benefiting various mobile AR applications built on 3D curve creation. However, when creating 3D curves on and around physical objects with mobile AR, tracking may become less robust or even be lost due to camera occlusion or textureless scenes. This motivates me to study how to achieve natural interaction with minimal tracking error during close interaction between a mobile phone and physical objects. To this end, I contribute an elicitation study on input point and phone grip and a quantitative study on tracking errors. Based on the results, I present a system for direct 3D drawing with an AR-enabled mobile phone as a 3D pen, together with a mobile AR interface for interactively correcting 3D curves affected by tracking errors. I demonstrate the usefulness and effectiveness of the proposed 3D curve creation system in two applications: in-situ 3D drawing and direct 3D measurement.
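
To make the 3D-pen idea concrete, below is a minimal sketch (not the thesis system) of turning per-frame 6-DoF device poses from an AR tracker into a world-space curve; the pose source, the tip offset, and the jitter threshold are illustrative assumptions.

```python
# Minimal sketch: treating a tracked phone as a 3D pen. Assumes an AR
# framework delivers a 6-DoF device pose per frame; TIP_OFFSET and the
# sampling loop are hypothetical, not the thesis's actual pipeline.
import numpy as np

TIP_OFFSET = np.array([0.0, -0.07, -0.01])  # assumed pen-tip point in the device frame (metres)

def tip_position(position: np.ndarray, rotation: np.ndarray) -> np.ndarray:
    """Map the device-frame tip offset into world coordinates."""
    return rotation @ TIP_OFFSET + position

def sample_curve(poses, min_step=0.002):
    """Append one world-space tip sample per frame, skipping sub-threshold jitter."""
    curve = []
    for position, rotation in poses:  # rotation: 3x3 matrix from the tracker
        p = tip_position(position, rotation)
        if not curve or np.linalg.norm(p - curve[-1]) >= min_step:
            curve.append(p)
    return curve
```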

Building on the fundamental explorations above, I further explore 3D character animation creation for content prototyping with mobile AR. 3D character animation is a paradigm commonly used in content prototyping: users often demonstrate dynamic user interactions through animated 3D characters. To validate ideas in realistic usage scenarios, creating animated virtual characters that closely interact with real-world environments is necessary but difficult. Existing systems adopt video see-through approaches to indirectly control a virtual character in mobile AR, making close interaction with real environments unintuitive. Instead, I explore using an AR-enabled mobile device to directly control the position and motion of a virtual character situated in a real environment. I conduct two elicitation studies: one to elicit user-defined motions of a virtual character interacting with real environments, and one to derive a set of user-defined motion gestures describing specific character motions. I find that an SVM-based learning approach achieves reasonably high accuracy in classifying gestures from the motion data of a mobile device. Based on these findings, I present ARAnimator, which allows both novice/casual and professional animation users to represent a virtual character directly with an AR-enabled mobile phone and to control its animation in AR scenes through motion gestures of the device, followed by animation preview and interactive editing in a video see-through interface. The experimental results show that with ARAnimator, users can easily create in-situ character animations that closely interact with different real environments. The resulting animations can be directly used to demonstrate and iterate on interaction patterns between target users and their surrounding environments.
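
As an illustration of the gesture-classification step, the following sketch trains an SVM on simple statistical features of windowed device motion data; the feature set, window shape, and hyperparameters are assumptions rather than the thesis's exact pipeline.

```python
# Illustrative sketch of SVM-based gesture classification from device
# motion data; features and hyperparameters are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def motion_features(window: np.ndarray) -> np.ndarray:
    """Summarise a (T, 6) window of accelerometer + gyroscope samples
    with per-channel statistics (mean, std, min, max)."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

def train_gesture_svm(windows, labels):
    """Fit an RBF-kernel SVM on the per-window feature vectors."""
    X = np.stack([motion_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X, labels)
    return clf

# Usage: clf = train_gesture_svm(train_windows, train_labels)
#        predicted = clf.predict([motion_features(new_window)])
```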

Beyond the mobile AR system for 3D character animation creation in content prototyping, I explore prototyping full-body physical interactions with mobile AR. Real-world IoT-enhanced spaces involve diverse proximity- and gesture-based interactions between users and IoT devices/objects. Prototyping such interactions benefits various applications, such as the conceptual design of ubicomp spaces. Existing prototyping techniques often require specialized hardware and coding skills to demonstrate interactive behaviors, or lack a connection between the created prototypes and real scenes. To avoid complex hardware setups and programming skills and to increase prototyping embodiment, researchers have explored interaction prototyping in AR scenes. However, existing AR interaction prototyping approaches have focused on prototyping situated experiences or context-aware interactions from the first-person view rather than full-body proxemic and gestural (pro-ges for short) interactions of real users in the real world. I conduct interviews to identify the challenges of prototyping pro-ges interactions in real-world IoT-enhanced spaces. Based on the findings, I present ProGesAR, a mobile AR tool for prototyping the pro-ges interactions of a subject in a real environment from a third-person view and for examining the prototyped interactions from both the first- and third-person views. The proposed interface supports virtual assets whose effects are dynamically triggered by a single subject, with triggering events based on four features: location, orientation, gesture, and distance. I conduct a preliminary study by inviting participants to prototype in a freeform manner using ProGesAR. The early-stage findings show that with ProGesAR, users can easily and quickly prototype their design ideas for pro-ges interactions.
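
To illustrate how such triggering events could be evaluated per frame, the sketch below checks the four named features against a subject's tracked state; the data structure, thresholds, and function names are hypothetical and not ProGesAR's actual API.

```python
# Hypothetical per-frame trigger check over the four feature types named
# above (location, orientation, gesture, distance); all names and
# thresholds are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Trigger:
    kind: str            # "location" | "orientation" | "gesture" | "distance"
    anchor: np.ndarray   # world-space point, or unit facing direction of interest
    threshold: float     # metres for location/distance, cosine for orientation
    gesture: str = ""    # expected gesture label for "gesture" triggers

def is_triggered(t, subject_pos, subject_dir, subject_gesture, target_pos):
    if t.kind == "location":      # subject stands within a region
        return np.linalg.norm(subject_pos - t.anchor) < t.threshold
    if t.kind == "orientation":   # subject faces the anchor direction
        return float(subject_dir @ t.anchor) > t.threshold
    if t.kind == "gesture":       # subject performs a specific gesture
        return subject_gesture == t.gesture
    if t.kind == "distance":      # subject is within range of a target object
        return np.linalg.norm(subject_pos - target_pos) < t.threshold
    return False
```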