Sketching Interfaces with Structure and Context Cues

Sketch-Based Interfaces Combining Structural and Contextual Information

Student thesis: Doctoral Thesis

Award date: 20 Jul 2021

Abstract

Sketching has been one of the most accessible means of idea communication and artistic expression throughout human history. Today, sketch-based interfaces, as an intuitive high-level interaction approach, benefit multiple tasks, such as image retrieval, digital painting, and information conveyance. A core part of the interface design and algorithm design of these applications is sketch interpretation. Extensive research efforts have focused on interpreting a single sketch, such as an illustrative sketched object or individual strokes. This thesis, however, focuses on interpreting a group of strokes or sketched objects and utilizing the structural and contextual relationships among them to support sketch-related tasks. In particular, we present three applications of sketch-based interfaces that benefit from these cues.

First, we propose to use the structural relationships among strokes to assist decorative pattern retrieval. Compared with keyword search, sketch-based image retrieval methods can represent a pattern's appearance, but the representation is limited to the lowest geometrical level and disregards the structural features of patterns. Inspired by how designers compose shape lines and construction lines together to demonstrate their ideas, we propose a multi-level sketch-based interface for pattern retrieval. It provides four brush tools for users to draw low-level geometrical features and high-level structural features, including reflection, rotation, and translation symmetries. Our system then leverages the strokes' information and their structural relationships to search for patterns.
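To make the idea of a high-level structural feature concrete, the toy sketch below, which is not taken from the thesis, scores how reflectively symmetric a set of stroke points is. The fixed vertical symmetry axis through the centroid and the Chamfer-distance matching are simplifying assumptions; all function names are hypothetical.

```python
import math

def reflect(points, axis_x):
    """Mirror 2D points across the vertical line x = axis_x."""
    return [(2.0 * axis_x - x, y) for x, y in points]

def chamfer(a, b):
    """Symmetric Chamfer distance between two 2D point sets."""
    def one_way(p, q):
        return sum(min(math.dist(u, v) for v in q) for u in p) / len(p)
    return one_way(a, b) + one_way(b, a)

def reflection_symmetry_score(points):
    """Lower score = more reflectively symmetric about the
    vertical axis through the point centroid."""
    axis_x = sum(x for x, _ in points) / len(points)
    return chamfer(points, reflect(points, axis_x))

# A mirror-symmetric point set scores ~0; an asymmetric one does not.
symmetric = [(-1.0, 0.0), (1.0, 0.0), (-2.0, 1.0), (2.0, 1.0)]
asymmetric = [(-1.0, 0.0), (1.0, 0.5), (0.3, 1.0), (2.0, 1.7)]
print(reflection_symmetry_score(symmetric))   # ~0
print(reflection_symmetry_score(asymmetric))  # clearly > 0
```

A retrieval system in this spirit could append such symmetry scores to low-level geometric descriptors so that patterns match on structure as well as shape.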

Second, we present a system that helps users autocomplete repetitive strokes during image-guided drawing by exploiting the contextual relationship between users' input strokes and a reference image. Drawing often involves many stroke repetitions, which can be enormous in number and may vary in style or arrangement. To alleviate users' input workload, we present an autocomplete interface. Our system analyzes the contextual relationships within the stroke history during an interactive drawing session to identify possible repetitions and infers their relationship with the reference image. It then suggests repetitive strokes that the user is likely to draw next. Users can accept or modify the suggestions, avoiding a large amount of manual input, or ignore them and continue drawing.
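The repetition-suggestion idea can be illustrated with a minimal example, under strong assumptions not made by the thesis itself: strokes are equal-length point sequences, and repetitions are pure translations of the same shape. All function names here are hypothetical.

```python
import math

def centroid(stroke):
    xs, ys = zip(*stroke)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def normalized(stroke):
    """Translate a stroke so its centroid sits at the origin."""
    cx, cy = centroid(stroke)
    return [(x - cx, y - cy) for x, y in stroke]

def shape_distance(a, b):
    """Mean pointwise distance between centroid-aligned strokes.
    Assumes corresponding samples; real systems resample first."""
    return sum(math.dist(p, q)
               for p, q in zip(normalized(a), normalized(b))) / len(a)

def suggest_next(strokes, tol=0.1):
    """If the last two strokes repeat the same shape, suggest a third
    copy placed by extrapolating their translation; else None."""
    if len(strokes) < 2:
        return None
    a, b = strokes[-2], strokes[-1]
    if len(a) != len(b) or shape_distance(a, b) > tol:
        return None
    (ax, ay), (bx, by) = centroid(a), centroid(b)
    dx, dy = bx - ax, by - ay
    return [(x + dx, y + dy) for x, y in b]

# Two identical horizontal dashes, shifted right by 2 units each time:
history = [[(0.0, 0.0), (1.0, 0.0)], [(2.0, 0.0), (3.0, 0.0)]]
print(suggest_next(history))  # [(4.0, 0.0), (5.0, 0.0)]
```

The described system goes further by also conditioning the suggestion on the reference image, so suggested strokes can adapt to local image content rather than repeating verbatim.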

Third, we present a sketch classification framework that uses the contextual relationships among sketched objects. Most prior methods classify sketched objects individually and often fail to identify their correct categories due to the highly abstract nature of sketches. For a sketched scene containing multiple objects, we propose classifying each sketched object by considering its surrounding context in the scene, which provides vital cues for resolving recognition ambiguity. We learn such contextual knowledge from a database of scene images and demonstrate its application to both incremental sketch classification and sketch co-classification.
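A minimal illustration of how context can disambiguate an abstract sketch, with made-up co-occurrence counts standing in for knowledge that would be learned from scene images (the numbers, labels, and function names are all hypothetical, not the thesis's model):

```python
# Hypothetical co-occurrence counts between object categories,
# as might be tallied from a database of labeled scene images.
COOCCUR = {("car", "road"): 9, ("car", "river"): 1,
           ("boat", "road"): 1, ("boat", "river"): 9}

def contextual_classify(appearance_scores, neighbor_label):
    """Re-rank an ambiguous object's candidate labels by how often each
    candidate co-occurs with an already-recognized neighboring object."""
    def score(label):
        prior = COOCCUR.get((label, neighbor_label), 1)
        return appearance_scores[label] * prior
    return max(appearance_scores, key=score)

# A shape that looks equally like a car or a boat is resolved
# differently depending on what it is drawn next to:
scores = {"car": 0.5, "boat": 0.5}
print(contextual_classify(scores, "river"))  # boat
print(contextual_classify(scores, "road"))   # car
```

In incremental classification, such a re-ranking step could run each time a new object is added to the scene, refining earlier ambiguous labels as more context accumulates.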

Each part of the thesis demonstrates the utility of the proposed methods through various evaluations and results. We hope to contribute to research on sketch-based interfaces by exploring different sketch interpretations and designing accessible and practical sketch-based applications for users.