Learning from Multi-annotator Data: A Noise-aware Classification Framework

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

10 Scopus Citations



Original language: English
Article number: 26
Journal / Publication: ACM Transactions on Information Systems
Issue number: 2
Online published: Feb 2019
Publication status: Published - Mar 2019


In sentiment analysis and emotion detection for social media, as in other text classification tasks involving supervised learning, researchers rely heavily on large and accurate labelled training datasets. However, obtaining large-scale labelled datasets is time-consuming, and high-quality labelled datasets are expensive and scarce. To deal with these problems, online crowdsourcing systems provide an efficient way to accelerate the collection of training data by distributing tasks among many annotators, creating large amounts of labelled data at an affordable cost. Such crowdsourcing platforms are particularly needed for social media text, since social network platforms (e.g., Twitter) generate huge amounts of textual data every day. However, people from different social and knowledge backgrounds hold different views on the same texts, which may lead to noisy labels. Existing noisy label aggregation/refinement algorithms mostly focus on aggregating labels from noisy annotations, which does not guarantee their effectiveness on subsequent classification/ranking tasks. In this article, we propose a noise-aware classification framework that integrates the steps of noisy label aggregation and classification. The aggregated noisy crowd labels are fed into a classifier for training, while the predicted labels are employed as feedback for adjusting the parameters of the label aggregation stage. The framework can run directly on crowdsourcing datasets and applies to various kinds of classification algorithms. The feedback strategy makes it possible to find optimal parameters without using known data for parameter selection. Simulation experiments demonstrate that our method provides significant label aggregation performance for both binary and multi-class classification tasks under various noisy environments. Experiments on real-world data validate the feasibility of our framework on real noisy data and help us verify the reasonableness of the simulated experiment settings.
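The aggregation-classification feedback loop described in the abstract can be illustrated with a toy sketch. This is not the paper's actual algorithm; it is a minimal stand-in that assumes weighted majority voting for label aggregation, a nearest-centroid classifier, and annotator weights updated by agreement with the classifier's predictions, all on simulated annotators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian clusters with binary ground truth.
n, d, m = 200, 2, 5          # samples, features, annotators
y_true = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, d)) + 3.0 * y_true[:, None]

# Simulated annotators: each reports the true label with its own reliability.
reliability = np.array([0.9, 0.85, 0.7, 0.6, 0.55])
correct = rng.random((n, m)) < reliability
A = np.where(correct, y_true[:, None], 1 - y_true[:, None])

weights = np.ones(m)
for _ in range(5):
    # Aggregation step: weighted majority vote over annotator labels.
    scores = A @ weights / weights.sum()
    y_agg = (scores > 0.5).astype(int)

    # Classification step: nearest-centroid classifier trained on aggregated labels.
    c0 = X[y_agg == 0].mean(axis=0)
    c1 = X[y_agg == 1].mean(axis=0)
    dist0 = np.linalg.norm(X - c0, axis=1)
    dist1 = np.linalg.norm(X - c1, axis=1)
    y_pred = (dist1 < dist0).astype(int)

    # Feedback step: re-weight each annotator by agreement with the classifier.
    weights = (A == y_pred[:, None]).mean(axis=0)

accuracy = float((y_agg == y_true).mean())
```

In this sketch the feedback step plays the role the abstract assigns to the predicted labels: annotators who agree with the classifier gain influence in the next round of aggregation, so the loop tends to down-weight unreliable annotators without any ground-truth labels being used.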

Research Area(s)

  • Crowdsourcing, Emotion detection, Sentiment analysis, Social media

Citation Format(s)

Learning from Multi-annotator Data: A Noise-aware Classification Framework. / ZHAN, Xueying; WANG, Yaowei; RAO, Yanghui et al.
In: ACM Transactions on Information Systems, Vol. 37, No. 2, 26, 03.2019.
