Weak Disambiguation for Partial Structured Output Learning

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › Peer-reviewed


Detail(s)

Original language: English
Pages (from-to): 1258-1268
Journal / Publication: IEEE Transactions on Cybernetics
Volume: 52
Issue number: 2
Online published: 23 Jun 2020
Publication status: Published - Feb 2022

Abstract

Existing disambiguation strategies for partial structured output learning do not generalize well when some candidate labels are false positives or closely resemble the ground-truth label. In this article, we propose a novel weak disambiguation method for partial structured output learning (WD-PSL). First, a piecewise large-margin formulation is generalized to partial structured output learning, which avoids handling the large number of candidate structured outputs that arise with complex structures. Second, the proposed weak disambiguation strategy assigns each candidate label a confidence value indicating how likely it is to be the true label, which reduces the negative effect of wrong ground-truth label assignment during learning. Two large margins are then formulated to combine two types of constraints: disambiguation between candidates and non-candidates, and weak disambiguation among the candidates. Within an alternating optimization framework, a new 2n-slack-variable cutting plane algorithm is developed to accelerate each iteration of the optimization. Experimental results on several sequence labelling tasks in natural language processing show the effectiveness of the proposed model.
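
The following is a minimal illustrative sketch, not the paper's actual WD-PSL formulation: it assumes a hypothetical linear scoring model w·φ(x, y) and shows, in simplified form, the two ingredients described above, namely confidence values over candidate labels and two hinge-style margins (candidates versus non-candidates, and a confidence-weighted margin among the candidates). The function names and the softmax-based confidence are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only -- NOT the paper's exact WD-PSL objective.
# Assumes a hypothetical linear model: score(w, x, y) = w . phi(x, y).
import numpy as np

def candidate_confidences(w, phi, x, candidates):
    """Assign each candidate label a confidence via a softmax over model scores."""
    scores = np.array([w @ phi(x, y) for y in candidates])
    scores -= scores.max()                     # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

def wd_losses(w, phi, x, candidates, non_candidates):
    """Return two simplified margin losses for one training example."""
    cand_scores = np.array([w @ phi(x, y) for y in candidates])
    non_scores = np.array([w @ phi(x, y) for y in non_candidates])
    conf = candidate_confidences(w, phi, x, candidates)

    # Margin 1: the best candidate should outscore every non-candidate by >= 1.
    loss_disambig = max(0.0, 1.0 + non_scores.max() - cand_scores.max())

    # Margin 2 (weak disambiguation): the highest-confidence candidate should
    # outscore the others, with violations weighted by the confidence gap.
    best = int(conf.argmax())
    loss_weak = sum(
        max(0.0, (conf[best] - conf[j]) + cand_scores[j] - cand_scores[best])
        for j in range(len(candidates)) if j != best
    )
    return loss_disambig, loss_weak
```

In the paper these two constraint types are handled jointly in a piecewise large-margin objective and optimized with an alternating scheme accelerated by the 2n-slack-variable cutting plane algorithm; the sketch above only conveys the intuition behind the constraints, not that optimization procedure.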

Research Area(s)

  • Cutting plane algorithm, partial structured output learning, piecewise large margin, weak disambiguation