Data-Driven 3D Interpretation of Freehand Drawings

Project: Research



Sketching provides a natural way to depict complex shapes and express ideas. Various techniques have been proposed to support sketch-based 3D modeling, with rich applications in fields such as education, engineering, and design. It is well known that human perception relies on both visual rules and visual memory for sketch understanding. However, sketch-based interfaces built on the concept of visual memory have to date been largely unexplored, even though visual memory offers a unique ability to infer highly detailed models from simple, rough sketches.

The very few existing solutions simply use a database of parts or models, akin to visual memory, and pose the problem as sketch-based shape retrieval. A user needs to sketch individual components one by one, which are then integrated into final models or scenes. Such approaches require either explicit or implicit sketch segmentation, enforcing artificial restrictions on the user and denying the freedom provided by pencil and paper. Considerable user effort is therefore required to create models with complex structures. In addition, since these approaches process individual sketched parts or objects independently, their performance is highly sensitive to the quality of individual strokes, demanding frequent user intervention and preventing a seamless sketching experience.

In this project we will explore a new data-driven approach to the semantic 3D interpretation of freehand drawings. We aim at an automatic part-assembly process driven by hasty input sketches that depict a 3D object or a 3D scene of objects. We intend to place as few constraints as possible on the way a user draws, for example, no restrictions on the drawing order of strokes or on the segmentation of sketched objects. This requires us to solve three challenging problems: segmenting freehand drawings into semantically meaningful components, recognizing the classes of individual components, and reconstructing the underlying shapes and structures.
To address the significantly increased level of ambiguity, we propose to use a repository of shapes composed of semantically segmented, labelled components as visual memory, and to solve these interdependent problems by heavily exploiting not only the local shapes of individual parts but also their relative geometric configurations as exhibited in the shape repository. We will show that our technique benefits various applications that rely on sketch-based shape retrieval and/or modeling. We believe that our work will take an important step towards human-like understanding of the 3D shapes and structures in freehand drawings or images.
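The idea of scoring part labellings by combining local shape similarity with the relative geometric configurations stored in a repository can be illustrated with a toy example. The following Python sketch is purely illustrative and not the project's actual method: the repository entries, the 2-vector descriptors, and the `pair_weight` term are all hypothetical stand-ins for learned shape descriptors and richer configuration models.

```python
import itertools
import math

# Hypothetical toy "visual memory": labelled parts, each with a local shape
# descriptor (a 2-vector here) and its relative position within the model.
REPOSITORY = {
    "chair_seat": {"descriptor": (0.9, 0.1), "rel_pos": (0.0, 0.5)},
    "chair_back": {"descriptor": (0.2, 0.8), "rel_pos": (0.0, 1.0)},
    "chair_leg":  {"descriptor": (0.1, 0.1), "rel_pos": (0.3, 0.0)},
}

def assignment_cost(parts, labels, pair_weight=0.5):
    """Cost of labelling sketched `parts` with repository `labels`.

    Combines a unary term (local descriptor distance) with a pairwise term
    that compares relative offsets between sketched parts against the
    offsets of the matched repository parts.
    """
    # Unary term: how well each part's local shape matches its label.
    cost = sum(
        math.dist(p["descriptor"], REPOSITORY[lab]["descriptor"])
        for p, lab in zip(parts, labels)
    )
    # Pairwise term: how consistent the parts' relative placement is with
    # the configuration recorded in the repository.
    for (p_i, l_i), (p_j, l_j) in itertools.combinations(zip(parts, labels), 2):
        sketch_off = [a - b for a, b in zip(p_i["pos"], p_j["pos"])]
        repo_off = [a - b for a, b in zip(REPOSITORY[l_i]["rel_pos"],
                                          REPOSITORY[l_j]["rel_pos"])]
        cost += pair_weight * math.dist(sketch_off, repo_off)
    return cost

def label_parts(parts):
    """Exhaustively pick the one-to-one labelling with the lowest joint cost."""
    return min(
        itertools.permutations(REPOSITORY),
        key=lambda labels: assignment_cost(parts, labels),
    )

sketch = [
    {"descriptor": (0.85, 0.15), "pos": (0.0, 0.5)},  # seat-like stroke group
    {"descriptor": (0.25, 0.75), "pos": (0.0, 1.0)},  # back-like stroke group
    {"descriptor": (0.15, 0.05), "pos": (0.3, 0.0)},  # leg-like stroke group
]
print(label_parts(sketch))  # ('chair_seat', 'chair_back', 'chair_leg')
```

Even in this toy setting, the pairwise term is what disambiguates parts whose local descriptors alone are similar, which is the key motivation for exploiting relative geometric configurations rather than matching parts independently.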


Project number: 9042032
Grant type: GRF
Effective start/end date: 1/11/14 – 1/04/19

    Research areas

  • sketch-based modeling, sketch interpretation, data-driven, segmentation