Artificial Intelligence, Web Content Personalisation and Big Tech in Sub-Saharan Africa: A Human Rights-Based Approach to Development
Student thesis: Doctoral Thesis
Detail(s)
Award date | 20 Feb 2024
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/theses/theses(fe552a57-a007-45d6-99b2-7a8ee443220a).html
Abstract
AI is a powerful tool used by both nations and powerful corporations to achieve disparate outcomes, such as aiding law enforcement in spotting criminals within a large crowd. For some corporations, e.g. those in finance, AI is used in credit rating and in determining whether a client is likely to default. Some big tech corporations, such as Meta and Alphabet, use AI tools to personalise the content one can access on platforms such as Facebook, YouTube, and Google Search. All of this is made possible by the AI's algorithmic decision-making. Meta and Alphabet are powerful corporations both financially and in terms of the amount of information they can process and make accessible to users globally. This power is reflected in these corporations' forays into Sub-Saharan Africa (SSA) to make their services and the internet accessible, in the name of development.
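The abstract does not describe how these personalisation systems are actually built. Purely as an illustrative sketch of the kind of algorithmic decision-making it refers to, the toy example below uses user-based collaborative filtering (one of the techniques named in the keywords below) to rank unseen items for a user according to the behaviour of similar users; all data, names, and parameters are hypothetical.

```python
import numpy as np

# Toy user-item interaction matrix (1 = engaged with item, 0 = did not).
# Rows are users, columns are content items; all values are hypothetical.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two interaction vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def personalise(user_idx, matrix, top_k=2):
    """Rank unseen items for one user by similar users' engagement."""
    target = matrix[user_idx]
    # Weight every other user's history by their similarity to the target user.
    scores = np.zeros(matrix.shape[1])
    for other_idx, other in enumerate(matrix):
        if other_idx == user_idx:
            continue
        scores += cosine_similarity(target, other) * other
    # Never re-recommend items the user has already engaged with.
    scores[target > 0] = -np.inf
    return np.argsort(scores)[::-1][:top_k]

print(personalise(0, interactions))  # indices of the two items to surface for user 0
```

A system built this way surfaces what people similar to you already engage with, which is precisely the filtering dynamic the thesis interrogates from a right-to-receive-information perspective.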
The power big tech companies wield creates a petri dish that fosters the violation of the right to receive information (RRI) by their platforms' content personalisation AI. Based on this understanding, this research probes the conceptual systems underpinning the content personalisation AI used by Facebook, YouTube, and Google Search, and how those conceptual systems facilitate the violation of RRI. It also seeks to establish the challenges facing accountability measures in the current international human rights dispensation that make it difficult to hold big tech corporations liable for the violation of RRI.
Consequently, it argues that because development is the higher ideal used to justify big tech companies' forays into SSA, and internet connectivity is treated as an essential social good, violations of RRI largely go unquestioned. This research therefore proposes the human rights-based approach to development as the framework that can help right the power imbalance, place obligations on big tech corporations, and hold them accountable for the violation of RRI.
- collaborative filtering, the Matthew effect and information dissemination, homophily and Facebook recommendations, epistemic paternalism, big tech in Sub-Saharan Africa, human rights-based approach to development, outcasting as enforcement, meaningfulness and Facebook, violation of the right to receive information
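Among these keywords, "the Matthew effect and information dissemination" points to a rich-get-richer dynamic in which already-popular content attracts ever more exposure. As a hypothetical illustration only (not code from the thesis), the short simulation below shows how a recommender that surfaces items in proportion to their current popularity amplifies early random luck.

```python
import random

# Hypothetical simulation of a rich-get-richer ("Matthew effect") loop:
# items are recommended in proportion to their current popularity, so early
# random luck is amplified and final exposure shares end up highly unequal.
random.seed(42)

exposure = [1] * 10  # ten content items, each starting with a single view

for _ in range(10_000):
    # An item's chance of being recommended is proportional to its exposure so far.
    item = random.choices(range(len(exposure)), weights=exposure, k=1)[0]
    exposure[item] += 1

total = sum(exposure)
shares = sorted((count / total for count in exposure), reverse=True)
print([round(share, 2) for share in shares])  # shares are typically far from equal
```

Under these assumed dynamics the final distribution of exposure is typically far from even and is locked in by chance early on, which is the information-dissemination concern the keyword alludes to.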