How machines see the world: Understanding image annotation

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) — Publication in refereed journal, peer-reviewed


Original language: English
Journal / Publication: NECSUS. European Journal of Media Studies
Issue number: Spring 2018 (#Resolution)
Online published: 31 Jul 2018
Publication status: Published - 2018


In Painting and Experience in Fifteenth Century Italy (1988), Michael Baxandall shows that painters of the fifteenth century were advised to follow a series of rules. These "guidelines" explained, for example, how each hand position, painted within that cultural context, represented a different concept. The rules were rich and detailed, and they helped the painter remain legible within that historical and cultural context. Today, companies such as Amazon and Facebook are trying to teach machines and algorithms to see and to understand what they see (image recognition). This process of signification, simple for a human being, remains complex for machines and algorithms. Hundreds of thousands of workers are therefore hired to label what they see. These workers, however, are paid pennies per labelled image and work in precarious conditions, which often leads to insufficient, poor, or confusing labelling. Yet these "low-quality" labels completely determine the way machines and algorithms see and understand the world. What are the consequences of a learning process that is confused, inaccurate, and qualitatively poor, at this unprecedented historical moment when more machines than human beings are analysing, and trying to make sense of, what they see?