Sketch-based Shape and Structure Analysis in Design and Fabrication
Student thesis: Doctoral Thesis
Award date: 14 Aug 2023
Permanent link: https://scholars.cityu.edu.hk/en/theses/theses(4e020692-46e2-4a31-8046-16b7dd1a515d).html
Abstract
Sketching is a universal and intuitive tool for humans to render and interpret the visual world, and it is extensively used by designers in product design and digital fabrication. Since human viewers can easily envision the 3D information missing from a sparse, abstract, and imprecise sketch, designers tend to use sketches to represent complex shapes, drawn by notable advantages such as flexibility, concision, and efficiency. However, inferring the desired content from an input sketch or multi-view sketches remains highly challenging for machines due to the ill-posed nature of the problem. With the successful development of deep learning techniques such as implicit representation, image-to-image translation, and metric learning, we have more opportunities and general tools to solve challenging problems in sketch-based shape and structure analysis. This dissertation explores sketch-based shape and structure analysis with these advanced frameworks in three aspects: imperfect inputs, interaction with external physical factors, and multi-view inputs.
To enable sketch-based shape and structure analysis in the design and fabrication process, we propose three deep learning-based techniques that assist users in three tasks: beautifying imperfect freehand sketches, simulating structural stress for a single sketch under a user-specified force, and inferring correspondences among multi-view sketches.
Although sketches are widely studied and used in various sketch-based applications, existing algorithms still struggle to directly make use of freely drawn sketches, which are usually imprecise and abstract, in particular sketches depicting man-made objects with diverse geometry and non-trivial topology. We present a novel freehand sketch beautification method, which takes as input a freely drawn sketch of a man-made object and automatically beautifies it both geometrically and structurally. Beautifying a sketch is challenging because of its highly abstract and widely varying drawing style. Existing methods are usually confined to the distribution of their limited training samples and thus cannot beautify freely drawn sketches with rich variations. To address this challenge, we adopt a divide-and-combine strategy. Specifically, we first parse an input sketch into semantic components, beautify individual components with a learned part beautification module based on part-level implicit manifolds, and then reassemble the beautified components through a structure beautification module. With this strategy, our method can go beyond the training samples and handle novel freehand sketches by learning both the possible part geometries and the plausible combinations of individual components. We demonstrate the effectiveness of our sketch beautification system with extensive experiments and a perceptual study.
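The divide-and-combine strategy can be illustrated with a minimal toy sketch. All function and variable names below are hypothetical: the thesis uses learned part-level implicit manifolds and a learned structure module, which are stood in for here by a simple geometric "snap" and a part-chaining rule.

```python
import numpy as np

def parse_into_components(sketch):
    """Divide: split a sketch (dict of named polylines) into semantic components."""
    return [(name, np.asarray(pts, dtype=float)) for name, pts in sketch.items()]

def beautify_part(points):
    """Stand-in for projection onto a learned part manifold:
    snap every vertex to a 0.5-unit grid to regularize part geometry."""
    return np.round(points * 2.0) / 2.0

def beautify_structure(parts):
    """Combine: stand-in for the structure beautification module.
    Translate each part so its first vertex meets the previous part's
    last vertex, producing a connected assembly."""
    assembled, offset = [], np.zeros(2)
    for name, pts in parts:
        pts = pts + (offset - pts[0]) if assembled else pts
        assembled.append((name, pts))
        offset = pts[-1]
    return assembled

# A wobbly two-part "chair" sketch: each part is a rough polyline.
freehand = {
    "seat": [(0.1, 0.9), (2.2, 1.1)],
    "leg":  [(2.4, 1.2), (2.3, -0.9)],
}
parts = [(n, beautify_part(p)) for n, p in parse_into_components(freehand)]
beautified = beautify_structure(parts)
```

The point of the decomposition is that part geometry and part combination are corrected independently, so novel component arrangements outside the training distribution can still be handled.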
In product design and digital fabrication, structural analysis of a designed prototype is a fundamental and essential step. However, such analysis is usually unavailable to designers in the early sketching phase, which limits their ability to contemplate a shape's physical properties and structural soundness. To bridge this gap, we present Sketch2Stress, a system that allows users to perform structural analysis of desired objects at the sketching stage. Sketch2Stress takes as input a sketch and a point map specifying the location of a user-assigned external force. It automatically predicts a normal map and a corresponding structural stress map distributed over the underlying sketched object. In this way, Sketch2Stress empowers designers to easily examine the stress sustained everywhere on their sketched object and to identify potentially problematic regions. Furthermore, combined with the predicted normal map, users can efficiently conduct a region-wise structural analysis by aggregating the stress effects of multiple forces applied in the same direction. We demonstrate the effectiveness and practicality of the Sketch2Stress system with extensive experiments and user studies.
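The region-wise aggregation idea can be sketched in a few lines. This is not the network from the thesis: `predict_stress_map` below is a hypothetical stand-in (a smooth falloff around the force location), and it only illustrates how per-force stress maps for forces in the same direction can be summed over the sketched region.

```python
import numpy as np

H, W = 64, 64  # resolution of the predicted stress map

def predict_stress_map(force_xy, magnitude=1.0):
    """Hypothetical stand-in for the image-to-image predictor:
    stress falls off smoothly with distance from the force point."""
    ys, xs = np.mgrid[0:H, 0:W]
    d2 = (xs - force_xy[0]) ** 2 + (ys - force_xy[1]) ** 2
    return magnitude * np.exp(-d2 / (2 * 8.0 ** 2))

def aggregate(forces):
    """Region-wise analysis: superpose the per-force stress maps of
    multiple forces applied in the same direction."""
    return sum(predict_stress_map(f) for f in forces)

# Two forces in the same direction at different locations on the sketch.
total = aggregate([(16, 16), (48, 48)])
```

With a real predictor, the same aggregation loop lets a designer probe a whole region by stamping several forces and reading the summed stress map.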
The above two works focus on the analysis and processing of single-view sketches. However, interpreting the missing 3D information from a single-view sketch alone remains challenging for existing computer algorithms, especially sketch-based shape reconstruction approaches. Therefore, multi-view inputs are often needed to reduce the inherent ambiguity of single-view sketches and to recover the underlying 3D geometry faithfully. Our third technique automatically computes the semantic shape correspondence among multi-view freehand sketches of the same object. Correspondence matching is a fundamental yet open problem in the research community, and it is especially challenging for multi-view sketches, since the visual features of corresponding points can be very sparse and can vary significantly across views. To solve this problem, we present SketchDesc, which learns a novel local sketch descriptor from data. We further contribute a training dataset by generating pixel-level correspondences for multi-view line drawings synthesized from 3D shapes. To handle the sparsity and ambiguity of sketches, we design a novel multi-branch neural network that integrates a patch-based representation and a multi-scale strategy to learn pixel-level correspondences among multi-view sketches. Through extensive experiments on hand-drawn sketches and multi-view line drawings rendered from multiple 3D shape datasets, we demonstrate the effectiveness of SketchDesc.
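A patch-based, multi-scale descriptor with nearest-neighbor matching can be illustrated as follows. This is a hand-crafted toy in the spirit of SketchDesc, not the learned network: the multi-branch embedding is replaced by concatenated, subsampled patches at several radii, and all names are hypothetical.

```python
import numpy as np

def crop(img, y, x, r):
    """Square patch of radius r around (y, x), zero-padded at borders."""
    pad = np.pad(img, r)  # constant (zero) padding
    return pad[y:y + 2 * r + 1, x:x + 2 * r + 1]

def descriptor(img, y, x, scales=(4, 8, 16)):
    """Multi-scale patch descriptor: crop at several radii, subsample
    each patch to roughly 9x9, concatenate, and L2-normalize."""
    feats = []
    for r in scales:
        p = crop(img, y, x, r)
        step = max(1, (2 * r + 1) // 9)
        feats.append(p[::step, ::step][:9, :9].ravel())
    v = np.concatenate(feats).astype(float)
    return v / (np.linalg.norm(v) + 1e-8)

def match(img_a, pt, img_b, candidates):
    """Return the candidate pixel in img_b whose descriptor is closest
    (in L2 distance) to the descriptor of pt in img_a."""
    da = descriptor(img_a, *pt)
    dists = [np.linalg.norm(da - descriptor(img_b, y, x)) for y, x in candidates]
    return candidates[int(np.argmin(dists))]

# Toy line drawing: one vertical and one horizontal stroke crossing at (20, 10).
img = np.zeros((32, 32))
img[:, 10] = 1.0
img[20, :] = 1.0
best = match(img, (20, 10), img, [(5, 5), (20, 10), (25, 30)])
```

The multiple radii serve the same purpose as the multi-scale branches in the learned descriptor: small patches capture sparse local strokes, while large patches add enough context to disambiguate visually similar points.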