Explainable AI to Mitigate the Lack of Transparency and Legitimacy in Internet Moderation

ABSTRACT

The massive use of Artificial Intelligence in content moderation on the internet is a reality of our times. However, it raises a number of questions, such as whether the use of opaque automatic systems is appropriate, or even whether platforms alone should make decisions that used to be made by the State. In this context, the use of black-box AI has come to be considered a threat to freedom of expression. On the other hand, keeping content online that promotes virtual abuse is equally harmful to this fundamental right. Against this backdrop, this study summarizes the main problems identified in the literature regarding the current paradigm, evaluates the responses offered by new technologies, and proposes a path toward a new moderation paradigm that is fair and ethical, in which both the State and social media platforms play a relevant role. This path involves the adoption of Explainable AI combined with transparent and legitimate criteria defined by society.

KEYWORDS:
Digital humanities; Automatic content moderation; Explainable AI; Freedom of expression on the internet; Ethics in Artificial Intelligence
