How machines see the world: understanding image labelling

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45); 32_Refereed conference paper (with ISBN/ISSN); peer-review

Detail(s)

Original language: English
Title of host publication: Art Machines: International Symposium on Computational Media Art
Editors: Richard William Allen, Olli Tapio Leino, Malina Siu, Sureshika Piyasena
Publisher: School of Creative Media, City University of Hong Kong
Pages: 104-105
ISBN (Print): 978-962-442-421-8
Publication status: Published - Jan 2019

Conference

Title: Art Machines
Location: City University of Hong Kong
Place: Hong Kong
Period: 4 - 7 January 2019

Abstract

Michael Baxandall, in Painting and Experience in 15th Century Italy (1988), shows that fifteenth-century painters were advised to follow a series of rules. These "guidelines" explained, for example, how each hand position depicted in a painting represented, within that cultural context, a different concept. These rules helped the painter maintain relevance within that historical and cultural context.

Today, companies all around the world are trying to teach Machines and Algorithms (M/A) to see and understand what they see (image recognition). However, this process of signification, simple for a human being, is still complex for M/A. Hundreds of thousands of workers are therefore hired through crowdsourcing platforms to label images. An image of a house appears on the monitor, and the worker attributes the label "house" to that image. These images are then categorized by the label they received, or semantic area, and collected in databases that are used to train M/A.
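
To make the pipeline described above concrete, here is a minimal, purely illustrative Python sketch (not part of the original paper) of how crowdsourced annotations are commonly aggregated: several workers label the same image, the most frequent label wins, and the resulting image-label pairs form the database used for training. All identifiers and data here are hypothetical.

    from collections import Counter, defaultdict

    # Hypothetical raw annotations from crowd workers: (image_id, label) pairs.
    # In practice these arrive from a crowdsourcing platform.
    annotations = [
        ("img_001", "house"), ("img_001", "house"), ("img_001", "barn"),
        ("img_002", "cat"),   ("img_002", "dog"),   ("img_002", "cat"),
    ]

    # Group the labels each image received.
    labels_per_image = defaultdict(list)
    for image_id, label in annotations:
        labels_per_image[image_id].append(label)

    # Resolve disagreements by majority vote. Ties, sparse votes, and
    # workers labelling unfamiliar objects are exactly where the noisy,
    # "low quality" labels discussed below enter the training database.
    dataset = {
        image_id: Counter(labels).most_common(1)[0][0]
        for image_id, labels in labels_per_image.items()
    }

    print(dataset)  # {'img_001': 'house', 'img_002': 'cat'}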

However, this labelling process produces a series of problems. The workers are paid pennies per labelled image and work in precarious conditions without any labor protection. Sometimes the annotators are required to label unfamiliar scenes or objects (e.g., objects and tools in a physics laboratory) even when they lack the relevant competence or knowledge. Moreover, if the employer considers their work unsatisfactory, payment can be denied without any explanation. [4] All of these factors often result in insufficient and confusing labelling. Yet these "low quality" labels are determining the way M/A understand the world.

Furthermore, every time we click on the internet or on social media, we are not only conveying information but also engaging in a pedagogical process. We are not merely viewers and users; we are teaching M/A how to look at the world.

Given this context, I would like to address the following questions: What are the consequences of a learning process that is confused, inaccurate, and qualitatively poor, in this unprecedented historical moment in which more machines than human beings are analyzing, and trying to make sense of, what they see? What are the implications of this low-quality work, which today appears not as an image but as labelled data, and which in turn contributes to fully defining the visual experience of these M/A?

Research Area(s)

  • Artificial Intelligence (AI), Visual arts, Machine Learning

Bibliographic Note

Information for this record is supplemented by the author(s) concerned.

Citation Format(s)

How machines see the world: understanding image labelling. / TRECCANI, Carloalberto.

Art Machines: International Symposium on Computational Media Art. ed. / Richard William Allen; Olli Tapio Leino; Malina Siu; Sureshika Piyasena. School of Creative Media, City University of Hong Kong, 2019. p. 104-105.

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45); 32_Refereed conference paper (with ISBN/ISSN); peer-review