Abstract
Perceptual-based three-dimensional (3D) modeling tools have been proposed to assist non-experts in creating 3D designs intuitively, with minimal prior knowledge required. In contrast to conventional computer-aided design (CAD) software, which can be overly intricate for novices, these tools enable users to rely on their perceptions to create and modify 3D models effectively. However, developing perceptual-based 3D modeling tools is challenging due to the varied and abstract nature of human perception: it typically requires extensive user studies and computational perceptual parsing of 3D shapes. This thesis explores how to facilitate users' 3D modeling process with perceptions in three aspects: creating interactable 3D designs, measuring the aesthetic quality of 3D shapes, and automating the beautification of 3D shapes.
First, from the perspective of creating 3D designs, we propose an intuitive motion-guided interface for modeling interactable multi-functional furniture. While most non-expert 3D design systems have focused on static objects, we propose that 3D modeling interfaces can support more intuitive interactions between the user and models that are dynamic and can be interacted with. We therefore design and develop a motion-guided interface: users create interactable furniture components by performing hand motions as if they were physically interacting with those components. For example, a user may rotate the cabinet model to create wheels for it. To explore users' preferred hand gestures for creating various dynamic furniture components, we conducted a preliminary user study, and then implemented a 3D modeling system with the preferred gestures as the basis of our motion-guided interface. An evaluation user study demonstrates that our motion-guided interface is user-friendly and efficient for novice designers creating conceptual furniture designs.
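The core idea of mapping a hand motion to a dynamic furniture component can be illustrated with a toy sketch. The heuristic below (classifying a trajectory as rotational or translational by comparing net displacement to path length), the threshold, and the joint names are our own illustrative assumptions, not the thesis's actual implementation.

```python
import math

def classify_motion(points):
    """Classify a 2D hand trajectory as 'hinge' (rotational gesture) or
    'slider' (translational gesture) by comparing the net displacement of
    the hand to the total length of the path it traced.

    NOTE: a hypothetical heuristic for illustration only.
    """
    path_len = sum(
        math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)
    )
    net_disp = math.dist(points[0], points[-1])
    if path_len == 0:
        return "none"
    # A loop-like path ends near where it started, so its net displacement
    # is small relative to its length, suggesting a rotational gesture.
    return "hinge" if net_disp / path_len < 0.5 else "slider"

# A roughly full-circle gesture -> rotational joint (e.g. a wheel or hinge).
arc = [(math.cos(t / 10), math.sin(t / 10)) for t in range(0, 64)]
# A straight drag -> translational joint (e.g. a sliding drawer).
line = [(0.1 * t, 0.0) for t in range(10)]
```

A real system would of course work on 3D tracked hand poses and a richer gesture vocabulary derived from the user study.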
Second, from the perspective of measuring the quality of existing 3D shapes, we propose a novel learning-based 3D aesthetics assessment. While previous works computed the visual aesthetics of 3D shapes only "globally", we propose a framework that learns both a "global" shape aesthetics measure, which computes an aesthetics score for a whole 3D shape, and a "local" shape aesthetics measure, which computes how much a local region on the shape's surface contributes to the whole shape's aesthetics. These measures are learned from data and hence do not rely on existing handcrafted notions of what makes a 3D shape aesthetic. To learn them, we use a dataset of global pairwise shape aesthetics, in which human participants compare pairs of shapes and indicate which shape in each pair is more aesthetic. We then propose a point-based neural network that takes a 3D shape represented by surface patches as input and jointly outputs its global aesthetics score and a local aesthetics map. To connect global and local aesthetics, we embed the global and local features into the same latent space and output scores with weight-shared aesthetics predictors. Furthermore, we design three loss functions to jointly supervise the training. We demonstrate the shape aesthetics results globally and locally, showing that our framework makes good global aesthetics predictions while the predicted aesthetics maps are consistent with human perception. Additionally, our local aesthetics maps enable the automatic construction of aesthetic-revealing patch galleries and an aesthetic-driven sub-part dataset, which can serve as inspiration and references for novice designers.
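Learning a scalar score from pairwise "which is more aesthetic" judgments can be sketched with a logistic pairwise ranking loss (in the Bradley-Terry / RankNet spirit). The tiny linear model, hand-made features, and hyperparameters below are illustrative stand-ins for the thesis's point-based neural network, not its actual architecture or losses.

```python
import math

def pairwise_loss(score_win, score_lose):
    """Logistic loss that pushes the preferred shape's score above the other's."""
    return math.log(1.0 + math.exp(-(score_win - score_lose)))

def train(pairs, feats, lr=0.5, epochs=200):
    """Learn weights w so that w . feats[winner] > w . feats[loser]
    for every human comparison (winner, loser) in `pairs`."""
    dim = len(next(iter(feats.values())))
    w = [0.0] * dim
    for _ in range(epochs):
        for win, lose in pairs:
            s_w = sum(a * b for a, b in zip(w, feats[win]))
            s_l = sum(a * b for a, b in zip(w, feats[lose]))
            # Gradient of the logistic pairwise loss w.r.t. (s_w - s_l).
            g = -1.0 / (1.0 + math.exp(s_w - s_l))
            for i in range(dim):
                w[i] -= lr * g * (feats[win][i] - feats[lose][i])
    return w

# Toy 2D features for three shapes; a pair (a, b) means shape a was
# judged more aesthetic than shape b by a human participant.
feats = {"a": [1.0, 0.2], "b": [0.4, 0.9], "c": [0.1, 0.1]}
pairs = [("a", "b"), ("b", "c"), ("a", "c")]
w = train(pairs, feats)
score = {k: sum(x * y for x, y in zip(w, v)) for k, v in feats.items()}
```

After training, the learned scores respect the human preference ordering; the thesis's framework additionally ties such global scores to per-patch local scores through a shared latent space.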
Third, from the perspective of editing existing 3D shapes, we propose a framework that automatically enhances the aesthetics of general 3D shapes. While previous automated beautification of 3D shapes has been limited to specific categories such as 3D face models, our framework applies to various man-made objects through a reference-based beautification strategy. We first collect aesthetics ratings of various 3D shapes to build a 3D shape aesthetics dataset. We then perform reference-based editing, beautifying the input shape by making it look more like an aesthetic reference shape. Specifically, we propose a reference-guided global deformation framework that coherently deforms the input shape so that its structural proportions become closer to those of the reference shape. We then optionally transplant some local aesthetic parts from the reference to the input to obtain the beautified output. Comparisons show that our reference-guided 3D deformation algorithm outperforms existing techniques. Furthermore, quantitative and qualitative evaluations demonstrate that our aesthetics enhancement framework's results are consistent with both human perception and existing 3D shape aesthetics assessments.
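The "move the input's proportions toward the reference's" idea can be illustrated with a deliberately simplified sketch: blending axis-aligned bounding-box extents via an anisotropic scale. The thesis's framework deforms shapes coherently and structure-aware; this toy version, including the `alpha` strength parameter, is our own simplification for illustration.

```python
def extents(points):
    """Axis-aligned bounding-box extent of a point set along each axis."""
    return [max(p[i] for p in points) - min(p[i] for p in points)
            for i in range(3)]

def blend_proportions(shape, reference, alpha=1.0):
    """Scale `shape` per axis so its extents move toward the reference's.
    alpha=0 leaves the shape unchanged; alpha=1 matches the reference's
    proportions exactly (a crude stand-in for coherent deformation)."""
    src, ref = extents(shape), extents(reference)
    scale = [(1 - alpha) + alpha * (r / s) for s, r in zip(src, ref)]
    return [[c * k for c, k in zip(p, scale)] for p in shape]

# A squat 2 x 2 x 1 box deformed toward a tall 1 x 1 x 3 reference.
box = [[x, y, z] for x in (0, 2) for y in (0, 2) for z in (0, 1)]
ref = [[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 3)]
tall = blend_proportions(box, ref)
```

A global per-axis scale cannot preserve part-level structure (e.g. chair legs would stretch along with the seat), which is precisely why the thesis proposes a coherent, reference-guided deformation instead.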
Our three works integrate motions and aesthetics into the traditional modeling process in novel ways. They are interconnected and together allow non-experts to create visually pleasing models intuitively and efficiently.
| Date of Award | 9 Sept 2024 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Chung Man Manfred LAU (Supervisor) |