
The proposal of the European regulation on artificial intelligence


Abstract

On 21 April 2021, the European Commission published its proposal for regulating AI (Artificial Intelligence Act - AIA). It is a pioneering legislative proposal that will undoubtedly influence other countries to follow the same path. This study aims to highlight the fundamental characteristics of the AIA in order to provide an overall and optimized view of the main aspects of the law. We used a descriptive exploratory method supported by bibliographic research, searching the “Portal Periódicos Capes/MEC” and “Google Scholar” databases for scientific articles whose titles contained the terms Artificial Intelligence, Artificial Intelligence Act, AI regulation, data governance, hard regulation, and soft regulation. The literature review shows that the European Commission opted for hard regulation and that the AIA proposes a risk-based approach, imposing regulatory burdens only when an AI system is likely to pose high risks to fundamental rights. It was also concluded that the AIA seeks to prevent the technology from being stifled, allowing an AI ecosystem to develop in the European Union.

Keywords:
Artificial Intelligence; Artificial Intelligence Act; Data governance; Hard regulation; Soft regulation


1. INTRODUCTION

The potential increase in the capabilities of Artificial Intelligence systems and their countless applications across all fields of society has raised several governance concerns. To ensure the reliable use of the technology, it is necessary to create new policies and rules that preserve fundamental rights and ethical principles. The field of AI governance is very new, and the number of government institutions dedicated exclusively to studying and supporting its regulatory initiatives is still minimal. However, governments worldwide are beginning to take notice and to create institutions and regulations capable of exercising greater control over the use of this technology.

The need to create regulatory legislation for AI seems to be settled. The question is to what extent regulation interferes with the development of the technology. There are heated discussions about whether the best option would be soft regulation, based on guidelines, principles, and self-regulation, or whether the path is hard regulation, with binding legal force from its entry into force.

The AI regulatory proposal presented by the European Commission in April 2021 is an example of hard regulation that seeks to set limits on the use of high-risk AI within the EU or affecting the countries of the bloc. Creating an efficient regulatory system that protects citizens without discouraging the development of the technology is a significant challenge for governments. However, it is premature to assert whether the European regulatory option is the best one. Indeed, the discussion about which regulatory model is appropriate for AI is only beginning.

This work aims to analyze the European Union's proposed AI regulation, highlighting its main aspects. Based on the state of the art, it will be essential to define a concept of Artificial Intelligence, given that it is the object to be regulated. It will also be of fundamental importance to examine the regulatory models that were available to the European legislature, in order to further discuss and assess the regulatory model actually adopted.

Based on this objective, the article begins with the conceptualization of AI, analyzing its main uses and advantages for the citizen. Then, the risks inherent in the use of AI are presented, highlighting the opacity of its decision-making processes, discriminatory biases, adverse effects on democratic processes, and mass surveillance.

In the second part of the work, two regulatory approaches are presented. The first, the hard regulatory approach, seeks to create a legal framework for AI problems, with mandatory compliance from its entry into force; the second, the soft regulatory approach, is based on recommendations, statements, manifestos, or proposals and has no binding or coercive force of law. This topic presents the main features and advantages of each approach.

Finally, the main characteristics of the bill that seeks to regulate AI in the European Union (AIA) are discussed. In general terms, the AIA is based on a risk structure, with each level defining requirements and obligations for AI system providers and users and identifying potential risks to health, safety, and fundamental rights. The Act groups AI systems into three levels of risk: systems that pose an unacceptable risk, high risk, and little or no risk.

2. ARTIFICIAL INTELLIGENCE: CONCEPT, ADVANTAGES, AND RISKS OF ITS USE

There is no single accepted definition of artificial intelligence. For this work, the concept established by the High-Level Expert Group on Artificial Intelligence (AI HLEG, 2019), appointed by the European Commission in 2018 (European Commission, 2019), will be used. In the document prepared by the study group, AI is conceptualized as software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge or processing the information derived from this data, and deciding the best actions to take to achieve the given goal. AI systems can use symbolic rules or learn a numerical model, and they can also adapt their behavior by observing how the environment is affected by their previous actions (AI HLEG, 2019). AI is able to acquire, process, and interpret large amounts of data, make decisions based on the interpreted data, and translate those decisions into action (Samoili et al., 2020).

It is possible to affirm that artificial intelligence (AI) currently has applications in almost all areas of society. For example, image recognition is used for diagnosis and prognosis in healthcare. Much research on AI in healthcare has shown promise for increasing the quality of services. However, there is a need for further theoretical advances and for interventions that cover all levels and operations throughout the system (Asan; Choudhury, 2021).

Another area that has benefited from AI research is the transportation sector. Automated vehicle AI systems are powered by big data from various sensing devices and by advanced computing capabilities. AI has thus become an essential component of automated vehicles, enabling them to perceive the surrounding environment and make appropriate decisions while in motion (Ma et al., 2020).

The efficiency of AI also makes life easier in several other situations, such as the functioning of social media, internet search engines, and media services for music and video, among others. One of its benefits for people is the use of location references (geolocation), whose security is enhanced by the application of artificial intelligence.

It can be said that AI is the fastest-growing deep tech in the world, with enormous potential to rewrite the rules of entire industries, drive substantial economic growth, and transform all areas of life. Deep techs are advanced and complex technologies that require high levels of scientific knowledge and, as a result, long development periods before their use becomes widespread. They seek to solve problems that afflict humanity, such as the search for the cure of a disease or the development of a vaccine or, as is the case with AI, the development of a technology that transforms and facilitates people's lives (Ferreira; Vardi, 2021).

AI is also considered a general-purpose technology (GPT), given its transformative potential and possible application in various sectors of the economy (Bresnahan; Trajtenberg, 1995). AI makes predictions and decisions and performs tasks usually carried out by humans, and this executive capacity is a key differential of the technology (United Kingdom Government, 2021).

Considering mainly AI's capacity to collect and process large amounts of data, increasing its power of observation over humans, there is a constant concern about privacy (Manheim; Kaplan, 2019). Through the connectivity of many AI systems, the analysis of large amounts of data, and the identification of links between them, AI can be used to de-anonymize large datasets, even when such datasets do not include personal data in themselves (Curzon et al., 2021).

This information is obtained through algorithms, which can be understood as programs designed to establish a solution to a problem by determining the steps needed to resolve the situation. The criteria for reaching that information can be constituted in two ways: directly, when the person provides the information, or when the information is retrieved through organizations or institutions, with the decision ultimately determining the service or need that is required (Mercader Uguina, 2022).

Moreover, given AI's self-learning ability and, therefore, its increasing autonomy, together with its enhanced ability to quickly learn and explore decision paths that humans may not have thought of, AI can find correlation patterns within datasets without necessarily making any statement about causality (Ufert, 2020). Consequently, AI can produce new solutions that may be impossible for humans to understand, making decisions without the reasons being known, which results in AI opacity. This opacity dramatically reduces the ability to explain AI. Furthermore, the training data of AI systems can be biased, leading AI systems to produce discriminatory results (Hutson, 2021).

The procedural dimension of the principle of isonomy implies the ability to account for the decisions made by AI systems and by the people who manage them. To do this, it must be possible to identify the entity that made the decision and the process that triggered the decision-making act. AI systems therefore need to be transparent: their capabilities and purpose must be openly communicated, and their decisions must be explainable to directly or indirectly affected parties. Without this information, it is impossible to properly challenge a decision (Hutson, 2021).

Cases in which it is impossible to explain the decision-making process are called black-box algorithms and require special attention. In such circumstances, other measures of demonstrability may be necessary (e.g., traceability, accountability, and transparent communication on system performance), provided that the system as a whole respects fundamental rights (Sánchez Bravo, 2020).

AI can perform functions that previously could only be performed by humans. For this reason, citizens are increasingly subject to decisions made by AI or with its assistance, which are sometimes difficult to understand and, when necessary, to challenge effectively. In this sense, the use of AI can pose a risk to society and individuals because, as with human decisions, decisions made by AI can also involve discrimination, violation of privacy, adverse effects on democratic processes, and mass surveillance (Hedlund, 2022).

AI increases the ability to analyze people's daily habits, which creates a risk that it may be used to violate data protection rules. Furthermore, by analyzing large amounts of data and identifying links between them, AI can also be used to re-identify and publish data about people, creating new risks to the protection of personal data (Poscher, 2021).

AI can also create security risks for consumers when integrated into new products and services (Divino, 2021). For example, in the case of an autonomous vehicle, the technology may fail to recognize an object, causing an accident with material and physical damage. These risks may arise from flaws in the design of the technology or be related to data availability and quality issues. To provide greater legal certainty for users and developers, there must be precise regulation that addresses these risks and protects fundamental rights and guarantees, as the legal uncertainty caused by a regulatory vacuum can undermine the use of the technology and thereby cause irreversible harm to companies investing in AI (European Commission, 2020).

While AI creates meaningful solutions to severe human problems, such as AI used to improve health services, it is also important to highlight that the solutions produced through data processing often carry prejudice and subjectivity, as they end up reflecting the views of their creators and the data with which they are fed (Rejmaniak, 2021).

One of the significant problems with the use of AI is gender-based discrimination. This is explained, in part, by the fact that technology giants such as Meta and Google today have less than 15% women in their workforce. In addition, a significant percentage of women are not represented in the AI developer community, and there are no data on transgender workers. It is also notable that only 13.8% of the authors of scientific studies on AI are women, which reinforces gender exclusion, stereotypes, and biases (Nwafor, 2021).

This can be verified, for example, in machine translation tools. Google Translate is the most widely used tool for text translation, capable of translating more than a hundred languages. A previous study found that Google Translate converted gender-neutral pronouns into stereotyped gendered pronouns. This happens because machines learn word associations from written texts, and these associations reflect stereotyped and exclusionary content. When translating phrases from Hungarian, a gender-neutral language, into English, phrases referring to professions traditionally associated with men, such as scientist, engineer, and CEO, are rendered with male pronouns, while phrases containing professions such as nurse, baker, and party or wedding planner are translated and interpreted as female (Prates; Avelar; Lamb, 2020).

Therefore, because of its characteristics, AI can seriously affect fundamental rights such as freedom of expression, freedom of assembly, human dignity, non-discrimination based on gender, racial or ethnic origin, religion or belief, disability, age or sexual orientation, the protection of personal data and privacy, the right to an effective judicial remedy and an impartial court, and consumer protection (European Commission, 2020). There is also the issue of inequality in access to technologies in general (Cristóvam; Schiefler; Sousa, 2020) and, specifically, the problem of the digital invisibility of vulnerable groups (Reyna; Gabardo; Santos, 2020).

The issue of inclusion is a highly relevant factor in the processes of creating algorithms. Through platforms and technological networks, the use of language has been reshaped toward relationships that become interpersonal. Thus, although AI brings numerous advantages to the private and public sectors, the great challenge of its use is reconciling all this technology with the regular and satisfactory exercise of fundamental rights whenever the citizen is confronted with informational asymmetries arising from these intelligent systems (Santos, 2021). It is in this sense that it is crucial to create a network of legal protection that guarantees human control, robustness, security, privacy, data governance, transparency, equality, diversity, non-discrimination, social and environmental welfare, and a system of accountability for illegal acts (European Commission, 2021).

3. POSSIBLE FORMS OF AI REGULATION AND THE EUROPEAN COMMISSION’S OPTION

Regulation consists of a structural system composed of an organized or connected group of objects (terms, units, or categories) forming a complex structure. The function of law is crystallized in a system of rules and institutions that sustain civil society, facilitate orderly interaction, and resolve the conflicts and disputes arising under those rules. Within this logic, law can be created through different processes, for example, through negotiations among those to whom the standard is addressed, by the imposition of legal rules by a regulatory body, or by the evolution of self-regulation mechanisms. The legal system is not a predetermined construction but is incorporated through rules into socially relevant systems (Weber, 2021).

As for the regulatory approach, the regulatory authority has two distinct options at its disposal: a hard regulatory approach and a soft one. The hard regulatory approach seeks to create a legal framework appropriate to the problems of AI research, development, and use. From this orientation, rules, laws, or regulations are created that define the space in which AI technologies may operate, as well as the liability derived from the damage they may cause. It is basically a legal discussion led by lawyers, jurists, and policymakers (Alegría, 2022).

One of the prerequisites for formulating regulatory rules is the availability of information about the operation and scope of the technology in society's day-to-day life. In this sense, the criticism of hard regulation is that, in the absence of consolidated information, the regulation will certainly fail and may also impair the evolution of the technology.

As a result, the regulatory authority should pay close attention to the timing of regulation: if the technology is regulated too early, there is a risk of inhibiting innovation; if too late, it could cause serious harm to the population. The timeframe for formulating standards in today's regulatory infrastructure may be inadequate for new disruptive technologies such as AI. In the current regulatory process, regulatory agents hold public hearings, propose rules, and receive opinions, finally producing the regulatory standard. However, this process is a snapshot of the current stage of the technology and does not capture developments that may occur soon after the standard is issued, generating a regulatory vacuum (Kaal; Vermeulen, 2017).

In turn, soft regulation is based on recommendations, statements, manifestos, or proposals and does not have the binding or coercive force of law. Nevertheless, it is a form of regulation that keeps the debate on regulation alive and generates guidelines that can focus discussions on specific aspects of interest. This direction, for example, was taken by the European Union when it presented the Ethics Guidelines for Trustworthy AI (European Commission, 2019). Likewise, the United States presented Preparing for the Future of Artificial Intelligence (Executive Office of the President of the United States, 2016). It is an ethical approach led by philosophers specialized in AI, political philosophers, and academics in general, and it has developed considerably in recent years (Alegría, 2022).

One of the reasons for the dissemination of these ethical guidelines, rather than hard regulation, is the problem known as the regulatory trilemma. The trilemma is a theoretical model used to identify systemic dysfunctions derived from the lack of harmony in the structural articulation between the spheres of law, politics, and society, which results in regulatory failures. One of the premises of this model is that the legitimation of legal norms is achieved in a reflexive phase of legal systems, in which, through indirect regulation (based on the self-referentiality of social systems), society's own activities come to be regulated. From this, it can be inferred that excessive regulation can produce unexpected effects, creating systemic dysfunctions (Teubner, 1984).

In a recent public consultation held by the European Commission with stakeholders on the subject of AI, the responses were unanimous on the need to act, pointing out that there are legislative gaps that need to be addressed. At the same time, several stakeholders also warned the Commission of the need to avoid duplication of rules, contradictory obligations, and excessive regulation (European Commission, 2021).

Some solutions have been presented to address the problem of excessive regulation and the mismatch in pace between regulation and technology. Among them are: dynamic regulation, which seeks to integrate dynamic elements such as feedback into regulation; principle-based regulation, which emphasizes general and abstract guiding principles; self-regulation or cooperative regulation, under which the industry regulates itself under the supervision of the regulatory agency; the creation of specialized courts that provide faster and more sophisticated legal decisions in cases involving rapidly changing disruptive technologies; sunset clauses, which cause legislation to expire automatically after a specific term, requiring the regulatory authority to review the regulatory standard; and regulatory sandboxes, which are monitored, safe environments where technology is tested free of strict regulations while ensuring the safety of users (Menengola, 2022).

Some scholars of AI regulation argue that hard regulation is inadequate, since it limits innovation owing to a lack of knowledge of the technological reality and of the commercial dynamics surrounding it. For this current of thought, the regulation of AI requires an understanding that goes beyond the classical categories of traditional legislation. A purely formal approach could destroy the system through excessive regulation. In the case of AI, this problem implies that the application of “excessive regulation” could cause unexpected side effects due to its mandatory nature. Furthermore, extensive and detailed regulation may not be suitable for a technology such as AI, in which changes occur very quickly and a high degree of creativity is necessary for development (Alegría, 2022).

Despite the criticism and challenges facing hard regulation, this was the option adopted by the European Commission when it published its proposal for an Artificial Intelligence Act (AIA) on 21 April 2021. Judging by the explanatory memorandum of the legislative proposal, the European Union is aware of all these challenges because, early on, it presents Artificial Intelligence as a rapidly evolving technology capable of delivering economic and social benefits to the whole of Europe. However, the European legislature is also aware that the same elements and techniques that produce socio-economic benefits can also pose risks to citizens and society. Therefore, the motto of the legislation is to safeguard European technological development in AI while respecting the values, fundamental rights, and principles of the European Union, so that the technology is at the service of people in a safe way (European Commission, 2021).

4. THE PROPOSAL TO REGULATE AI IN THE EUROPEAN UNION: AIA - ARTIFICIAL INTELLIGENCE ACT

On 21 April 2021, the European Commission published its proposal for a new Artificial Intelligence Act (AIA), building on other legislative initiatives that provided for EU legislation on AI. The proposed Artificial Intelligence Act (AIA) is the first attempt by a large global economy to draw up a general legal framework for AI. Indeed, European pioneering will make this regulation an example for other countries, producing the well-known “Brussels effect”, a term coined by Bradford in a 2012 academic paper (Bradford, 2012) and later expanded in her book “The Brussels Effect: How the European Union Rules the World”, published in 2020 (Bradford, 2020).

The regulation will apply to private and public sector actors who are providers and/or users of AI systems. Furthermore, it will extend beyond the contours of the EU due to its extraterritorial scope, applying to third-country providers who place services with AI systems on the EU market and to providers whose AI systems produce outputs that are used in the EU. These features further strengthen the Brussels effect of this regulation; that is, like the GDPR, EU rules for AI can extend across much of the world as international companies comply with EU rules and standardize these practices in other jurisdictions (Floridi, 2021).

From a general point of view, the AIA is a regulation, not a directive, so, like the GDPR, it will come into force on a set date in all 27 Member States and will have binding legal force across the EU.

From an ethical point of view, the AIA inherits the same fundamental approach seen in the GDPR, which is based on protecting human dignity and fundamental rights. However, the AIA uses anachronistic terminology to define its approach as human-centered, that is, as an approach that puts humanity at the center of technological development. This view seems dangerous, since the environment should also be considered an essential factor. Homocentric appears to be synonymous with anthropocentric, and it is well known how much the planet has suffered from humanity's obsession with its own importance and centrality, as if everything were always at its service, including every aspect of the natural world, no matter the costs and losses (Floridi, 2021).

Fortunately, despite the unfortunate and obsolete terminology, the bill's overview also highlights the value of AI as a technology that can be very sustainable, providing support against pollution and climate change and for the sustainable development of information societies (Cowls et al., 2021). It seems clear that the AIA's approach strengthens the idea that environmental protection should be a cross-cutting issue in the EU.

The European Union's bill (AIA) is currently the most ambitious attempt in the world to regulate AI systems. With this legislation, the EU seeks to ensure that the systems used in its territory, or that affect the countries of the bloc, are safe and respect the laws and values in force in its territory (Mökander et al., 2022). Indirectly, the EU seeks to encourage investment and innovation in AI and to build public confidence that AI systems are used in a way that respects fundamental rights and European values.

The AIA lays down horizontal rules for developing, commercializing, and using AI-driven products, services, and systems within the EU. The proposed regulation provides fundamental rules for artificial intelligence that apply to all industries. In addition, the bill introduces a product safety structure built around four categories of risk: minimal risk, limited risk, high risk, and unacceptable risk. A lighter legal regime applies to AI systems of minimal risk, while the intensity of legal control increases as the risk grows, up to the exclusion of applications that present an unacceptable risk (Kop, 2021).

The bill also provides for the imposition of requirements for market entry and the certification of high-risk AI systems through a mandatory CE marking procedure (the CE mark indicates mandatory compliance for several products marketed in the EU; the certification attests that the product meets all legal requirements and complies with EU product safety directives) (Hill; Balman, 2015). To ensure fair results, the regulatory regime imposed by the bill also applies to the datasets used for the training, testing, and validation of machine learning systems. It is evident that the European legislature seeks to codify high standards of trust, pursuing the fullest application of democratic values, human rights, and the rule of law.

As stated above, the AIA is based on a four-level risk structure, with each level defining requirements and obligations for AI system providers and users and identifying potential risks to health, safety, and fundamental rights. Governance and requirements differ between risk levels. AI systems considered an unacceptable risk, for example because they clearly harm people's safety, will be banned. On the other hand, AI systems that pose little or no risk are not subject to any of the interventions stipulated in the AIA, apart from some specific transparency obligations. Most AI systems are believed to fall into this category (Ebers, 2021).

AI systems that pose minimal or no risk, such as spam filters, will be allowed without restrictions, but providers will be encouraged to adhere to voluntary codes of conduct. The European Commission predicts that most AI systems will fall into this category. Providers of AI systems that pose a limited risk, such as chatbots, will be subject to transparency obligations (e.g., technical documentation on function, development, and performance) and may choose to adhere to voluntary codes of conduct. Transparency obligations will serve, among other things, to enable users to make informed decisions about how they integrate AI software into their products and/or services. AI systems that pose a high risk to fundamental rights will require ex-ante and ex-post conformity assessments (CDEI, 2021).
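For illustration only, the tiered logic described above can be summarized in a short sketch. The tier names and obligation lists below merely restate the classification discussed in this section; they are an assumption for didactic purposes and do not reproduce the text of the AIA itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers, following the four categories discussed above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of each tier to the regulatory treatment summarized in this section.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["ex-ante conformity assessment", "risk management system", "ex-post monitoring"],
    RiskTier.LIMITED: ["transparency obligations", "voluntary codes of conduct"],
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes of conduct encouraged"],
}

def treatment(tier: RiskTier) -> list:
    """Return the illustrative obligations associated with a given risk tier."""
    return OBLIGATIONS[tier]

# Example: a chatbot-style system would fall under the limited-risk tier.
print(treatment(RiskTier.LIMITED))
```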

To ensure a consistent level of protection with regard to all high-risk AI systems, the EU Charter of Fundamental Rights and the EU's international trade commitments have been adopted as a common standard (AIA, Memorandum 13) (European Commission, 2021). Requirements for high-risk AI systems include, but are not limited to, establishing a risk management system, identifying and mitigating known and foreseeable risks, and appropriate testing and validation (AIA, Title III, Chapter 2). However, the AIA does not necessarily define rules for specific technologies. Instead, it seeks to establish processes to identify the use cases that require additional layers of governance to support specific policy goals (Mökander et al., 2022).

The European Union's objective with the AIA is to ensure a homogeneous strategy for the digital market of the bloc's countries. It seeks to ensure the proper functioning of the internal market by creating harmonized rules for the development, placing on the Union market, and use of products and services that incorporate AI technologies or are provided as stand-alone AI systems. The urgency in passing a regulation that applies to all countries in the bloc arises because some Member States are already creating national rules regulating the use of AI, which could bring problems, including fragmentation of the internal market regarding the requirements needed to develop AI-based products and services, and reduced legal certainty for suppliers and users of AI systems (European Commission, 2021).

Since AI depends on broad sets of data and can be integrated into any product or service that circulates freely in the European market, the objectives of the proposed regulation cannot be achieved effectively by the action of the Member States alone. Adopting national approaches to the problem would only create more legal uncertainty and obstacles and slow the market's acceptance of artificial intelligence (European Commission, 2021).

Some features of AI, such as opacity, complexity, data dependency, and autonomous behavior, can adversely affect a set of fundamental rights enshrined in the EU Charter of Fundamental Rights (European Commission, 2012). Thus, the proposed regulation seeks to ensure a high level of protection of these fundamental rights and aims to address the various risks through a clearly defined risk-based approach.

The proposed regulation presented by the European Commission may improve and promote the protection of the rights enshrined in the EU Charter of Fundamental Rights, namely: the right to human dignity (Article 1), respect for private and family life and the protection of personal data (Articles 7 and 8), non-discrimination (Article 21) and equality between men and women (Article 23).

Furthermore, the proposal seeks to avoid an inhibiting effect on the rights to freedom of expression (Article 11) and freedom of assembly (Article 12), and to ensure the protection of the right to an effective remedy and an impartial tribunal, the presumption of innocence and the rights of defense (Articles 47 and 48), as well as the right to good administration.

Moreover, the proposal will positively affect the rights of certain particular groups, such as workers' rights to fair and equitable working conditions (Article 31), the right to a high level of consumer protection (Article 38), children's rights (Article 24), and the right to the integration of persons with disabilities (Article 26). In addition, the right to a high level of environmental protection and the improvement of its quality (Article 37) is also relevant, including in relation to the health and safety of citizens.

Obligations regarding ex-ante testing, risk management, and human oversight will also facilitate respect for other fundamental rights by minimizing the risk of wrong or biased AI-assisted decisions in critical areas such as education and training, employment, essential services, the maintenance of public order and, particularly, the judiciary (where AI is increasingly being used, including in decisions reviewing administrative acts) (Cristóvam; Schiefler; Peixoto, 2020). Furthermore, if violations of fundamental rights still occur, those affected will have access to effective remedies, since the transparency and traceability of AI systems will be ensured, coupled with solid ex-post controls (European Commission, 2021).

Member States will be required to appoint one or more competent national authorities to enforce the regulation at the national level, in coordination with existing sectoral legislation. In addition, Member States will be tasked with establishing rules on penalties, which should be proportionate to the size of the company and the market, dissuasive, and mindful of the interests of small and medium-sized enterprises (SMEs) and startups.

If the Act is passed as it stands, the maximum fines will be EUR 30,000,000 or 6% of the total annual worldwide turnover of the preceding year, whichever is higher. In addition, Member States will be encouraged to launch AI regulatory sandboxes to promote the safe testing and adoption of AI systems under the direct guidance and supervision of the competent national authorities, with preferential treatment for SMEs and startups to support innovators with fewer resources. Competent authorities should also provide personalized guidance to SMEs and startups to ensure that regulation does not smother innovation.
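As a minimal arithmetic sketch of the "whichever is higher" cap described above (the turnover figure is hypothetical and used only for illustration):

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine under the proposed rule:
    EUR 30,000,000 or 6% of total annual worldwide turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_worldwide_turnover_eur)

# Hypothetical company with EUR 1 billion in annual worldwide turnover:
# 6% of turnover (EUR 60 million) exceeds the EUR 30 million floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```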

A new European Artificial Intelligence Board will be set up to facilitate the consistent implementation of the regulation, composed of representatives of the competent national authorities of the Member States, the European Data Protection Supervisor, and the Commission. The Board will be influential in determining which AI technologies are classified as “high risk” and will be complemented by a public database of high-risk AI systems managed by the Commission. While the proposed AI Regulation still needs to go through the EU's ordinary legislative procedure before it comes into force, the legislative financial statement indicates that implementation costs are budgeted for the fiscal year 2023 (CDEI, 2021).

5. CONCLUSION

The European Commission’s proposal is the world’s first attempt to gain legislative control over the AI phenomenon. One of the significant advantages is that the proposal follows a risk-based approach and imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights.

The European legislature has made a clear choice for the hard regulatory approach. To this end, the AIA is a legal text that, if adopted, will have immediate and binding application in all EU Member States. As seen in the course of the study, the EU has an interest in standardizing regulatory rules on the use of AI to prevent each Member State from creating its own guidelines, which could create legal uncertainty and affect the development of the technology.

Despite the limitations imposed by the AIA on AI systems classified as high risk, the bill seeks to avoid stifling the technology, allowing an AI ecosystem to be created on EU territory. This intention is perceived, for example, in the Act's provision for regulatory sandboxes, so that small businesses and startups have space for innovation and can test their products in a controlled environment with a lighter incidence of the regulatory standards that could inhibit their creativity.

Another relevant point of the Act is its attention to the environment and sustainability. Although obsolete terminology is used to define its approach as human-centered, which may suggest that the environment takes second place, an analysis of the set of standards established in the proposal reveals its concern for the environment. Throughout the proposal, AI is highlighted as a technology that can be sustainable, providing support against pollution and climate change and for the sustainable development of the information society.

The AIA will also encourage developers to deploy trustworthy AI by design, that is, to embed democratic values and fundamental rights from the first line of code.

Finally, the AIA will also impact the regulatory regimes of other countries that have commercial relations with the European Union, confirming the Brussels effect addressed in this study. Countries such as the United States and China may incorporate standards and values that protect users' fundamental rights into their policies and regulatory frameworks in order to access the European market.

References

  • AI HLEG - High-Level Expert Group on Artificial Intelligence. A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines. Brussels. 2019. Disponível em Disponível em https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines Acesso em: 07 out. 2022.
    » https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines
  • ALEGRÍA, Jonathan Piedra. Descolonizando la Ética de la Inteligencia Artificial. Dilemata, n. 38, p. 247-258, 2022. Disponível em: Disponível em: https://www.dilemata.net/revista/index.php/dilemata/article/view/412000447 Acesso em: 15 set. 2022.
    » https://www.dilemata.net/revista/index.php/dilemata/article/view/412000447
  • ASAN, Onur; CHOUDHURY, Avishek. Research trends in artificial intelligence applications in human factors health care: mapping review. JMIR human factors, v. 8, n. 2, 2021. Disponível em: Disponível em: https://humanfactors.jmir.org/2021/2/e28236 Acesso em: 26 set. 2022.
    » https://humanfactors.jmir.org/2021/2/e28236
  • BRADFORD, Anu. The Brussels effect. Northwestern University Law Review 107.1 (2012): 1-67. Disponível em: Disponível em: https://northwesternlawreview.org/issues/the-brussels-effect/ Acesso em: 14 set. 2022.
    » https://northwesternlawreview.org/issues/the-brussels-effect/
  • BRADFORD, Anu. The Brussels effect: how the European Union rules the world. New York: Oxford University Press, 2020. Kindle edition.
  • BRESNAHAN, Timothy F.; TRAJTENBERG, Manuel. General purpose technologies ‘Engines of growth’? Journal of Econometrics, v. 65, n. 1, p. 83-108, 1995. Disponível em: https://www.sciencedirect.com/science/article/abs/pii/030440769401598T. Acesso em: 12 out. 2022.
  • CDEI - Centre for Data Ethics and Innovation. The European Commission’s artificial intelligence act highlights the need for an effective AI assurance ecosystem. 2021. Disponível em: https://cdei.blog.gov.uk/2021/05/11/the-european-commissions-artificial-intelligence-act-highlights-the-need-for-an-effective-ai-assurance-ecosystem/. Acesso em: 14 set. 2022.
  • COWLS, Josh et al. The AI gambit: leveraging artificial intelligence to combat climate change-opportunities, challenges, and recommendations. AI & Society, p. 1-25, 2021. Disponível em: https://link.springer.com/article/10.1007/s00146-021-01294-x. Acesso em: 07 out. 2022.
  • CRISTÓVAM, José Sérgio da Silva; SCHIEFLER, Eduardo André Carvalho; PEIXOTO, Fabiano Hartmann. A inteligência artificial aplicada à criação de uma central de jurisprudência administrativa: o uso das novas tecnologias no âmbito da gestão de informações sobre precedentes em matéria administrativa. Revista do Direito, Santa Cruz do Sul, v. 3, n. 50, p. 18-34, jan./abr. 2020. Disponível em: https://www.academia.edu/83462726/A_IA_CENTRAL_DE_JURISPRUDENCIA_ADMINISTRATIVA. Acesso em: 17 out. 2022.
  • CRISTÓVAM, José Sérgio da Silva; SCHIEFLER, Eduardo André Carvalho; SOUSA, Thanderson Pereira de. Administração pública digital e a problemática da desigualdade no acesso à tecnologia. International Journal of Digital Law, Belo Horizonte, ano 1, n. 2, p. 97-116, maio/ago. 2020. Disponível em: https://journal.nuped.com.br/index.php/revista/article/view/schiefler2020. Acesso em: 17 out. 2022.
  • CURZON, James et al. Privacy and Artificial Intelligence. IEEE Transactions on Artificial Intelligence, v. 2, n. 2, p. 96-108, 2021. Disponível em: https://ieeexplore-ieee-org.ez433.periodicos.capes.gov.br/stamp/stamp.jsp?tp=&arnumber=9450036. Acesso em: 29 set. 2022.
  • DIVINO, Sthéfano Bruno Santos. Desafios e benefícios da inteligência artificial para o direito do consumidor. Revista Brasileira de Políticas Públicas, v. 11, n. 1, 2021. Disponível em: https://www.publicacoes.uniceub.br/RBPP/article/view/6669. Acesso em: 05 out. 2022.
  • EBERS, Martin. Standardizing AI - The Case of the European Commission’s Proposal for an Artificial Intelligence Act. The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics, 2021. Disponível em: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3900378. Acesso em: 07 out. 2022.
  • EUROPEAN COMMISSION. Charter of fundamental rights of the European Union. Brussels, 2012. Disponível em: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12012P/TXT. Acesso em: 02 out. 2022.
  • EUROPEAN COMMISSION. Ethics guidelines for trustworthy AI. 2019. Disponível em: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. Acesso em: 28 set. 2022.
  • EUROPEAN COMMISSION. Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Brussels, 2021. Disponível em: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206. Acesso em: 02 out. 2021.
  • EUROPEAN COMMISSION. White paper on artificial intelligence: a European approach to excellence and trust. Brussels, 2020. Disponível em: https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en. Acesso em: 01 out. 2021.
  • EXECUTIVE OFFICE OF THE PRESIDENT OF THE UNITED STATES. Preparing for the future of artificial intelligence. 2016. Disponível em: https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf. Acesso em: 28 set. 2022.
  • FERREIRA, Rodrigo; VARDI, Moshe Y. Deep tech ethics: an approach to teaching social justice in computer science. In: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, 2021. p. 1041-1047. Disponível em: https://dl.acm.org/doi/abs/10.1145/3408877.3432449. Acesso em: 29 set. 2022.
  • FLORIDI, Luciano. The European Legislation on AI: A brief analysis of its philosophical approach. Philosophy & Technology, v. 34, n. 2, p. 215-222, 2021. Disponível em: https://link.springer.com/article/10.1007/s13347-021-00460-9. Acesso em: 07 out. 2021.
  • HEDLUND, Maria. Distribution of forward-looking responsibility in the EU process on AI regulation. Frontiers in Human Dynamics, 2022. Disponível em: https://www.researchgate.net/publication/359895608_Distribution_of_Forward-Looking_Responsibility_in_the_EU_Process_on_AI_Regulation. Acesso em: 16 set. 2022.
  • HILL, D. et al. What Is CE Marking? How Technologies Are Classified, and How to Navigate the System. Value in Health, v. 18, A367, 2015. Disponível em: https://www.valueinhealthjournal.com/article/S1098-3015(15)02806-5/fulltext. Acesso em: 06 out. 2022.
  • HUTSON, Matthew. The Opacity of Artificial Intelligence Makes It Hard to Tell When Decision-making Is Biased. IEEE Spectrum, v. 58, n. 2, p. 40-45, 2021. Disponível em: https://ieeexplore-ieee-org.ez433.periodicos.capes.gov.br/stamp/stamp.jsp?tp=&arnumber=9340114. Acesso em: 30 set. 2022.
  • KAAL, Wulf A. Dynamic regulation for innovation. In: Perspectives in Law, Business & Innovation (Mark Fenwick, Wulf A. Kaal, Toshiyuki Kono & Erik P. M. Vermeulen eds.). New York: Springer, 2016. U of St. Thomas (Minnesota) Legal Studies Research Paper n. 16-22. Disponível em: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2831040. Acesso em: 12 out. 2022.
  • KAAL, Wulf A.; VERMEULEN, Erik P. M. How to Regulate Disruptive Innovation - From Facts to Data. Jurimetrics, v. 57, n. 2, 2017 (forthcoming). U of St. Thomas (Minnesota) Legal Studies Research Paper n. 16-13. Disponível em: https://ssrn.com/abstract=2808044. Acesso em: 26 out. 2021.
  • KOP, Mauritz. EU Artificial Intelligence Act: The European Approach to AI. Transatlantic Antitrust and IPR Developments, 2021. Disponível em: https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/. Acesso em: 12 out. 2022.
  • MA, Yifang et al. Artificial intelligence applications in the development of autonomous vehicles: a survey. IEEE/CAA Journal of Automatica Sinica, v. 7, n. 2, p. 315-329, 2020. Disponível em: https://ieeexplore.ieee.org/abstract/document/9016391. Acesso em: 26 set. 2022.
  • MANHEIM, Karl; KAPLAN, Lyric. Artificial intelligence: Risks to privacy and democracy. Yale JL & Tech., v. 21, p. 106, 2019. Disponível em: https://heinonline.org/HOL/LandingPage?handle=hein.journals/yjolt21&div=4&id=&page=. Acesso em: 29 set. 2022.
  • MENENGOLA, Everton. Blockchain na Administração Pública Brasileira. Rio de Janeiro: Lumen Juris, 2022.
  • MERCADER UGUINA, Jesús R. Algoritmos e inteligencia artificial en el derecho digital del trabajo. Valencia: Tirant lo Blanch, 2022.
  • MÖKANDER, Jakob et al. Conformity assessments and post-market monitoring: a guide to the role of auditing in the proposed European AI regulation. Minds and Machines, v. 32, n. 2, p. 241-268, 2022. Disponível em: https://link.springer.com/article/10.1007/s11023-021-09577-4. Acesso em: 14 set. 2022.
  • NWAFOR, Ifeoma Elizabeth. AI ethical bias: a case for AI vigilantism (AIlantism) in shaping the regulation of AI. International Journal of Law and Information Technology, v. 29, n. 3, p. 225-240, 2021. Disponível em: https://academic.oup.com/ijlit/article/29/3/225/6389717?login=true. Acesso em: 15 set. 2022.
  • POSCHER, Ralf. Artificial intelligence and the right to data protection. Max Planck Institute for the Study of Crime, Security and Law Working Paper, n. 2021/03, 2021. Disponível em: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3769159. Acesso em: 04 out. 2022.
  • PRATES, Marcelo O. R.; AVELAR, Pedro H.; LAMB, Luís C. Assessing gender bias in machine translation: a case study with Google Translate. Neural Computing and Applications, v. 32, n. 10, p. 6363-6381, 2020. Disponível em: https://link.springer.com/article/10.1007/s00521-019-04144-6. Acesso em: 07 out. 2022.
  • REJMANIAK, Rafal. Bias in Artificial Intelligence Systems. Bialostockie Studia Prawnicze, v. 26, p. 25, 2021. Disponível em: https://heinonline.org/HOL/LandingPage?handle=hein.journals/bialspw26&div=36&id=&page=. Acesso em: 07 out. 2022.
  • REYNA, Justo; GABARDO, Emerson; SANTOS, Fábio de Sousa. Governo eletrônico, invisibilidade digital e direitos fundamentais. Sequência (Florianópolis), n. 85, p. 30-50, ago. 2020. Disponível em: https://periodicos.ufsc.br/index.php/sequencia/article/view/75278. Acesso em: 17 out. 2022.
  • SAMOILI, Sofia et al. AI Watch. Defining Artificial Intelligence. Towards an operational definition and taxonomy of artificial intelligence. 2020. Disponível em: https://eprints.ugd.edu.mk/28047/. Acesso em: 29 set. 2022.
  • SÁNCHEZ BRAVO, Álvaro Avelino. Marco Europeo para una inteligencia artificial basada en las personas. International Journal of Digital Law, Belo Horizonte, ano 1, n. 1, p. 65-78, jan./abr. 2020. Disponível em: https://journal.nuped.com.br/index.php/revista/article/view/sanchezbravov1n1/262. Acesso em: 30 set. 2022.
  • SANTOS, Gabriela da Silva. Do risco de lesão aos direitos e garantias fundamentais diante da propensão estereotipada da Inteligência Artificial. Monografia de Conclusão de Curso (Direito) - Unisul, Araranguá, 2021. 49 p. Disponível em: https://repositorio.animaeducacao.com.br/handle/ANIMA/19598. Acesso em: 04 out. 2022.
  • TEUBNER, Gunther. Das regulatorische Trilemma: Zur Diskussion um post-instrumentale Rechtsmodelle. Quaderni Fiorentini per la storia del pensiero giuridico moderno, v. 13, n. 1, p. 109-149, 1984. Disponível em: http://www.centropgm.unifi.it/cache/quaderni/13/0112.pdf. Acesso em: 29 set. 2022.
  • UFERT, Fabienne. AI regulation through the lens of fundamental rights: How well does the GDPR address the challenges posed by AI? European Papers - A Journal on Law and Integration, v. 2020, n. 2, p. 1087-1097, 2020. Disponível em: https://www.europeanpapers.eu/it/europeanforum/ai-regulation-through-the-lens-of-fundamental-rights. Acesso em: 15 set. 2022.
  • UNITED KINGDOM GOVERNMENT. National AI Strategy. 2021. Disponível em: https://www.gov.uk/government/publications/national-ai-strategy. Acesso em: 14 set. 2022.
  • WEBER, Rolf H. Artificial Intelligence ante portas: Reactions of Law. J - Multidisciplinary Scientific Journal, v. 4, n. 3, p. 486-499, 2021. Disponível em: https://www.mdpi.com/2571-8800/4/3/37. Acesso em: 30 set. 2022.
  • 1
    CE mark is an indication of mandatory compliance for several products marketed in the EU. The certification indicates that the product meets all legal requirements and that it complies with product safety directives in the EU (HILL; BALMAN, 2015).

Publication Dates

  • Publication in this collection
    10 July 2023
  • Date of issue
    2022

History

  • Received
    18 Oct 2022
  • Accepted
    10 Jan 2023