Towards Bridging the Gap Between Freehand Sketches and 3D Models

Project: Research



Due to their rich expressiveness and ease of input, freehand sketches have been extensively used for retrieving or constructing 3D models. Conversely, existing 3D models can be used for the semantic interpretation of sketches. These tasks are challenging because freehand sketches and 3D models have significantly different shape representations in different dimensions. One common approach to this problem is to algorithmically render 3D models as 2D line drawings (containing silhouette lines, geometric ridges, etc.) under specific viewpoints. However, due to varying levels of abstraction, different art styles, and shape/scale distortions, human sketches are not particularly similar to algorithmically rendered line drawings of 3D models. The gap caused by these differences in domain, dimension, viewpoint, and abstraction significantly limits the performance of existing algorithms that take casual sketches as input.

In this project we aim to bridge the gap between freehand sketches and 3D models by analyzing how non-artist users casually sketch 3D objects. A related problem has been explored in [Cole et al. 2009], which, however, focuses on studying where skilled artists carefully draw lines with respect to a given rendered 3D model in an observational drawing setup. Although their work provides interesting findings benefiting automatic line drawing algorithms, their dataset and findings are of limited use for the many applications that require more casual sketch inputs. Via crowdsourcing, we will thus construct the first-ever large-scale collection of freehand sketches that are roughly aligned with 3D models. Given the limited budget, we have to carefully select 3D models and viewpoints so that the sketch-model pairs in the resulting dataset are informative for various applications.
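The silhouette-line rendering mentioned above can be illustrated for triangle meshes: under an orthographic view, a silhouette edge is any edge shared by one front-facing and one back-facing triangle. The following is a minimal sketch of that idea (the function name and the tetrahedron example are illustrative, not part of the project):

```python
import numpy as np

def silhouette_edges(vertices, faces, view_dir):
    """Return undirected mesh edges on the silhouette for an orthographic
    view direction: edges shared by one front-facing and one back-facing
    triangle."""
    # Per-face normals via the cross product of two triangle edges.
    tri = vertices[faces]
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    # A face is front-facing if its normal points toward the viewer,
    # i.e. against the viewing direction.
    front = normals @ view_dir < 0

    # Map each undirected edge to the faces sharing it.
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)

    # Silhouette edges: the two adjacent faces disagree on facing.
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]

# Example: a unit tetrahedron viewed from above (looking down -z).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(sorted(silhouette_edges(verts, faces, np.array([0.0, 0.0, -1.0]))))
```

In practice, production line-drawing renderers also extract geometric ridges, valleys, and suggestive contours, and handle perspective projection; this sketch covers only the simplest silhouette case for closed manifold meshes.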
To better understand the relationship between freehand sketches and 3D models, we will perform sketch analysis both spatially and temporally, at the levels of stroke and part. Our research will have significant impact on non-photorealistic rendering and on applications that take casual sketches as input. Scientifically understanding the relationship between sketches and 3D models enables feasible solutions to example-based sketch synthesis for 3D objects of general categories, a challenging and unexplored non-photorealistic rendering problem. Example-based sketch synthesis not only allows the abstraction of 3D models beyond contour line drawings but also serves as an effective data augmentation method for various deep-learning-based applications. In this project we will study multiple practical applications that involve fine-grained sketch understanding, including fine-grained sketch-based shape retrieval, semantic sketch segmentation and labeling, sketch beautification, sketch-to-model, and model-to-sketch tasks. We will make the large-scale collection of sketch-model pairs, as well as the analysis and generation tools, publicly available, hoping to create new opportunities for sketch-model analysis and synthesis.


Project number: 9042894
Grant type: GRF
Effective start/end date: 1/11/19 → …