ABSTRACT
This paper analyzes the use of policy instruments to increase artificial intelligence patents. There is a race to produce AI technologies to reduce countries' geopolitical risks and to improve their conditions of development and global technological insertion. AI technologies are strategic in the digital transformation scenario, requiring governments to design policies that promote AI technology, expand data protection, and ensure digital sovereignty. This paper compares public policy instruments from G20 member countries through the qualitative comparative analysis (QCA) methodology. The paper concludes that mixes of policy instruments matter in producing AI patents, demonstrating the different configurations of mixes that produce outcomes. The paper also analyzes these instruments' role in digital sovereignty and in the application of AI in industry and government.
KEYWORDS AI policy; Policy instruments; G20; Digital sovereignty; Digital transformation
1. Introduction
Artificial intelligence (AI) does not have a precise definition in the literature (WANG, 2019). AI does not mean automating repetitive activities through code. AI involves emulating human intelligence to make decisions and developing the ability to solve problems more efficiently. AI is defined as an abstraction of the human mind used to design solutions that mimic rationality (SIMON, 1957, 1973, 1979, 1983; NEWELL; SIMON, 1961; RUSSELL, 1997), cognitive functions (RUSSELL; NORVIG, 2010), capacity (MINSKY, 1985), behavior (FLACH, 2012), and thought structure (MARKRAM, 2006). Applied to the public sector and industry, AI's goal is to produce predictions that exceed human performance in decision making (RUSSELL; NORVIG, 2010). Understood in this way, AI represents systems that make decisions based on learning and increasing performance.
The dissemination of these technologies has an impact on everyday life. AI applications include virtual agents in different policy domains, such as health and immigration (ZHENG et al., 2018; MEHR, 2017), security control, monitoring, and facial recognition to identify criminals (POWER, 2016), autonomous vehicles (JEFFERIES, 2016), chatbots for public and private services (MEHR, 2017), and image diagnosis that accelerates healthcare (COLLIER; FU; YIN, 2017). The growing application of AI has changed the industry's value chain, producing gains in productivity and competitiveness (SERGI et al., 2019; ZHONG et al., 2017).
The impacts of AI on society vary. There are problems with the effects of increasing industrial automation on jobs, algorithmic biases along the dimensions of race and gender that reproduce social injustices (BENJAMIN, 2019), technological redlining in urban space that reproduces inequalities (EUBANKS, 2018), and excessive algorithmic discretion in the everyday life of citizens (DANAHER, 2016). AI applied to social media has also changed forms of communication, producing political polarization and hate speech (SUNSTEIN, 2018). These technologies impact society in different ways, raise new ethical problems, pose new challenges for organizations, and enable unprecedented forms of surveillance and control.
The advancement of AI technologies has changed global geopolitics. Globalization has created an interdependence of national states in which the digital world, especially AI technologies, has started to influence domestic politics. Cyberspace has become part of the morphology of global power (NYE, 2014; KEOHANE; NYE, 1998). AI technologies have become a problem of global asymmetry in the development and dissemination of digital technologies, with new forms of colonization (COULDRY; MEJIAS, 2019) and a growing reaction from countries toward digital sovereignty (FLORIDI, 2020). The concept of digital sovereignty is promoting a global race for the development of AI technologies to ensure greater data protection and autonomy for national governments and industry.
This article aims to analyze the instruments of AI policy by comparing the G20 countries. The problem that motivates this article is whether policy instruments matter in producing results regarding the advancement of AI technologies. Comparing the 20 largest global economies - the G20 countries - we analyze how the configuration of instruments affects AI policy outcomes. Our argument is that instrument mixes explain policy success or failure. In the first section, we address the design of AI policy. In the second section, we address the problem of policy instruments and their impact on AI policy. In the third section, we present the research design. In the fourth section, we present the results, and in the fifth section we discuss them.
2. Designing policy for AI
There is no settled concept of AI policy. AI policy comprises a specific policy domain for science and technology and the growing dissemination of technological products in society. AI policy can carry two different conceptions. First, there is a perspective focused on the consequences of AI technologies for society, which reflects a complex set of issues. These policies are concerned with creating ethical and regulatory standards for applying AI in society (CALO, 2017). AI policy in this sense seeks to create an institutional framework that addresses the consequences of expanding AI in everyday life (CALO, 2017).
On the other hand, there are AI policies concerned with the extension and production of technologies in society. They do not address the consequences of AI but the essential factors for producing technologies (KATZ, 2017; DWIVEDI et al., 2021). These policies concern government initiatives to promote AI technology development in order to maintain digital sovereignty and ensure effective data protection mechanisms, cybersecurity, and internet governance. This movement toward digital sovereignty stems from geopolitical problems related to cyberspace: the increasing territorialization of internet governance (GLEN, 2014), international problems related to cyber war (SHACKELFORD, 2020), and data protection mechanisms for national citizens (HUMMELL; ALEXANDER; LIEBIG, 2018).
Considering these two dimensions, AI policy is the set of government actions to expand the benefits of AI technology in society while minimizing the potential risks and costs, as well as the ethical and regulatory concerns surrounding this technology. In the current stage, the race for the development of AI technologies is justified by the construction of digital sovereignty and the reduction of geopolitical risks of countries' dependence on technology.
The race for AI technologies leads countries to accelerate research, development, and application as a global process (ALLEN; HUSAIN, 2017). The race for AI requires governments to promote policies that support and accelerate research and development and produce a growing dominance of patents related to this technology. Patents can serve as a proxy to measure the results of AI policy (ARCHIBUGI; PLANTA, 1996). The immediate result of AI policy is measured by the production of patents specific to these technologies. This acceleration of patent production reflects the race to govern AI technology and the problem of international asymmetries. The race for AI has accelerated unevenly since 2015, which reflects the asymmetries of global power. Figure 1 below shows the evolution of the number of AI patents in the G20 countries.
AI policy actors are universities, research centers, and industry, which develop basic and applied research using different AI methodologies - machine learning, deep learning, facial recognition, natural language processing - and solutions applied to different types of knowledge. Governments use policy instruments to encourage research and development in AI and to produce this acceleration of technological products, applied especially to industry, markets, and governments. These instruments act as an anchor that links basic knowledge to market demands and global dissemination through knowledge networks. The AI race extends to several sectors, including the military (TADDEO; FLORIDI, 2018), health (YU; KOHANE, 2019), and education (CAVE; ÓHÉIGEARTAIGH, 2018), among others.
This race for AI technology has raised the concern of international organizations, especially the United Nations, the OECD, and the G20. These organizations' perspectives reinforce cooperation and interdependence concerning research and development and the institutional and ethical parameters of technology (UNITED NATIONS, 2019; G20, 2020). The OECD has pointed to the challenges of digital transformation and how AI technologies can contribute to national development, requiring regulatory standards and institutional development to deal with AI challenges (ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT, 2019). AI policy, therefore, deals with incentives for research and development that extend AI technologies to markets, industry, and governments, as well as with the regulatory standards and governance that guide the production of technologies.
3. Instruments and portfolios for AI policy
Public policies are steered by specific institutions that ensure a mode of governance composed of rules, instruments, actors, interactions, and conceptions of values and ideas (HOWLETT, 2009; PETERS, 2005). The objective of policy design is to produce a template to guide the various interventions carried out in the policy (PETERS, 2018). The design approach takes these interventions as the basic unit of analysis. It is a robust approach for analyzing the impacts of different policies on society, which are intended to produce behavioral changes. The template formulated in the design guides the interventions to connect public policy rules, instruments, and objectives. Policy design is defined as “[...] the deliberate and conscious attempt to define policy goals and to connect them to instruments or tools expected to realize those objectives” (HOWLETT, 2019, p. 92).
The essential design issue is that policy objectives connect with outcomes through a complex process of selecting and organizing instruments (HOWLETT, 2009). Choosing policy instruments is a relatively rational process of connecting the means necessary to achieve policy objectives (INGRAM; SCHNEIDER, 1990). Depending on the instruments' configuration, a different governance style affects the policy process (JORDAN et al., 2012). The mixes of instruments and their connection with normative assumptions and with the actors will determine policy success or failure.
Policy instruments are “[...] governance techniques that, in one way or another, involve the use of state authority or its conscious limitation” (HOWLETT, 2005, p. 31). Policymakers choose policy instruments to achieve policy objectives. The choice of instruments depends on context and time, and it affects the coherence, consistency, and congruence of the policy design. Policies are often created and implemented through combinations of policy instruments (HOWLETT, 2019). The choice of policy instruments creates a battle not only over the most efficient way to solve a problem but also over the influence that various affected interests will have on policy implementation (SALAMON, 2011; PETERS, 2005).
Additionally, the connection between policy objectives and outcomes does not occur through singular instruments. Governments employ instrument mixes to achieve policy objectives (HOWLETT, 2019). These instrument mixes fulfill the policy objectives by inducing the actors’ behavior. For example, regulatory instruments induce compliance, while financial instruments induce incentives for the actors to achieve their objective. In the case of AI policy, instruments can engage actors – universities, industry, and research centers - to produce more technologies within specified regulatory standards and compliance.
There is a tendency to analyze instruments as singular objects. However, advances in the understanding of instruments in policy design lie in analyzing combinations of tools to achieve an objective. The analysis of instrument portfolios makes it possible to understand how such combinations produce public policy outcomes (BALI; HOWLETT; RAMESH, 2021). Public policies have multiple objectives, multiple actors, and multiple institutions. This complexity means that policy design involves not only the choice of a single tool but also different sets of instruments that together form portfolios reflecting multiple objectives (SCHAFFRIN; SEWERIN; SEUBERT, 2014).
In this research, we analyzed the sets of instruments used by the G20 countries to achieve AI policy objectives. The research problem is to understand what sets of instruments are necessary for outcomes of the AI policies implemented by the G20 countries. The results are measured in terms of patent publication by each G20 country. The postulate is that governments can design policies that induce actors to produce more AI technologies, registering them in the form of patents. To understand the instrument sets, we used data from the OECD's AI Policy Observatory.
4. Research design
This research analyzes policies for AI development based on a comparison of different policy instruments. The methodology examines the configuration of policy instruments for developing AI through Qualitative Comparative Analysis (QCA) (RAGIN, 2008). The QCA method assesses the membership of cases in certain sets and identifies the relationships between those sets, which describe theoretically defined phenomena. The QCA method also makes it possible to interpret the relationships between sets in terms of sufficiency and necessity (SCHNEIDER; WAGEMANN, 2010). The method begins by documenting the different configurations of conditions associated with each case of an observed outcome. The conditions are organized into sets that are compared with the outcome set to understand which conditions are necessary and sufficient to produce the outcome of interest. The set configurations are then subjected to a minimization procedure that identifies the simplest set of conditions that can account for all observed outcomes, as well as for their absence. The QCA method is qualitative and requires that its conclusions be deepened with the support of theory. In this research, we investigate how the sets of policy instruments implemented in AI policies by the G20 countries are necessary and sufficient conditions for producing patents in AI.
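To make the truth-table step concrete, the sketch below shows, in Python, how crisp-set cases can be grouped into truth-table rows and how each row's consistency with the outcome can be computed. The countries, condition labels, and 0/1 codings are purely hypothetical placeholders, not the data analyzed in this paper.

from collections import defaultdict

# Hypothetical crisp-set data: 1 = instrument present / outcome achieved, 0 = absent.
cases = {
    "CountryA": {"grants": 1, "networks": 1, "consultation": 1, "patents": 1},
    "CountryB": {"grants": 1, "networks": 0, "consultation": 1, "patents": 1},
    "CountryC": {"grants": 0, "networks": 0, "consultation": 0, "patents": 0},
    "CountryD": {"grants": 1, "networks": 0, "consultation": 1, "patents": 0},
}
conditions = ["grants", "networks", "consultation"]

# Each distinct configuration of conditions becomes one truth-table row.
rows = defaultdict(list)
for country, data in cases.items():
    configuration = tuple(data[c] for c in conditions)
    rows[configuration].append((country, data["patents"]))

# Row consistency: share of the cases in the row that display the outcome.
for configuration, members in sorted(rows.items(), reverse=True):
    consistency = sum(outcome for _, outcome in members) / len(members)
    print(configuration, [name for name, _ in members], f"consistency={consistency:.2f}")

In this toy example, CountryB and CountryD share the same configuration but differ on the outcome, so their row has a consistency of 0.50; the minimization procedure would then have to treat that row as contradictory.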
In this research, we used data regarding the policy instruments employed by the G20 countries collected in the OECD AI Policy Observatory. The objective of this Observatory is to monitor the instruments and strategies adopted by OECD and G20 member countries for the development of AI.
4.1 Case selection
The article is based on a specific data set of the countries that make up the G20. The cases are Argentina, Australia, Brazil, Canada, China, France, Germany, India, Italy, Japan, the Netherlands, Mexico, South Korea, Russia, Saudi Arabia, South Africa, Spain, Turkey, the United Kingdom, and the United States, comprising the 20 largest economies in the world. These cases were selected by convenience, considering that they are the largest economies globally and share common perspectives on AI policy.
The G20 was formed in 1999, initially bringing finance ministers together to establish cooperation policy for economic development. In 2008, the G20 was expanded to include the heads of government of these countries. Since then, the G20 agenda has included reforms of the new global banking supervision rules, governance reforms coordinated with the International Monetary Fund and the World Bank, and a real expansion of cooperation among members on various development-related policy topics (WADE, 2011).
Specifically, at the G20, several working groups seek to strengthen cooperation for development and to shape common policies on different topics. In 2019, within the structure of the G20, the S20 was established, a multidisciplinary group dedicated to cooperation structures for science and technology. The S20 brings together the science academies of the countries that make up the G20. As a result of the G20 meeting in 2020, the report Foresight: Science for Navigating Critical Transitions - Task Force 3 - The Digital Revolution (G20, 2020) was published. The report contains recommendations for developing science and the technological transition of the member countries. Among these recommendations, the report specifies policies for AI development, with a special focus on open science and the development of governance mechanisms that reduce the geopolitical risks of AI use. This position of the G20 stems from a conception of technological openness as a resource to reduce monopolies and asymmetries (BOSTROM, 2017).
4.2 Outcome and calibration
In this article, the outcome is the number of patents registered in the different G20 countries. Data were collected from the OECD's AI Policy Observatory.1 The outcome used in the research design is listed in Table 1. For the patent variables, we collected data between 2015 and 2020. The choice of this period stems from the fact that, after 2015, countries initiated national strategies for the development of AI policies, intending to accelerate the research and development of technologies that apply AI in industry and government. This time frame is justified because the race for designing AI policies begins after 2015.
Data on this outcome indicate a deep gap between the G20 countries (see Figure 1). Inequalities between countries in patent registration derive from different institutional frameworks. Data were collected on patents published between 2015 and 2020. The patent registration data were normalized using the sum of patents published in this time frame. Having computed this sum, the distribution of patents among the G20 countries was calculated by extracting the square root of the sum of patents in each country, divided by the maximum number of patents across countries. Normalization was achieved with the following formula:
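The formula itself did not survive the text extraction. A plausible reconstruction, based on the verbal description above and on the assumption that the denominator is also taken as a square root so that the country with the maximum number of patents normalizes to 1, is:

$P_i^{norm} = \dfrac{\sqrt{\sum_{t=2015}^{2020} p_{i,t}}}{\sqrt{\max_{j}\sum_{t=2015}^{2020} p_{j,t}}}$

where $p_{i,t}$ denotes the number of AI patents published by country $i$ in year $t$.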
Once this formula is applied, each case presents a normalized result of the sum of patents between 2015 and 2020. After normalization, to make it possible to compare the sets, we calibrated the number of patents by country. The first step in the QCA method is outcome calibration. Calibration is the process of assigning set-membership scores to cases by converting raw data for conditions and outcomes into values that represent the degree to which each case belongs to the set, between 0 (non-membership) and 1 (full membership) (SCHNEIDER; WAGEMANN, 2010; KAHWATI; KANE, 2020). Table 2 presents the calibrated outcome and the justification for the calibration.
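As an illustration of the kind of transformation involved, the sketch below implements in Python the standard fuzzy-set direct calibration (the logistic procedure popularized by Ragin and implemented in fsQCA). The three anchors in the usage example are hypothetical and are not the cut-offs actually adopted in Table 2.

import math

def direct_calibration(value, full_non, crossover, full_member):
    """Map a raw score to a fuzzy-set membership score in [0, 1]
    using three qualitative anchors (direct method of calibration)."""
    upper_logodds = math.log(0.95 / 0.05)   # log-odds at the full-membership anchor
    lower_logodds = math.log(0.05 / 0.95)   # log-odds at the full-non-membership anchor
    deviation = value - crossover
    if deviation >= 0:
        scalar = upper_logodds / (full_member - crossover)
    else:
        scalar = lower_logodds / (full_non - crossover)
    log_odds = deviation * scalar
    return math.exp(log_odds) / (1 + math.exp(log_odds))

# Hypothetical anchors on the normalized patent score: 0.10 = full non-membership,
# 0.40 = crossover, 0.90 = full membership.
print(round(direct_calibration(0.70, full_non=0.10, crossover=0.40, full_member=0.90), 3))

A case exactly at the crossover receives a membership of 0.5, while cases at the two outer anchors receive 0.05 and 0.95, which is why the choice of anchors has to be theoretically justified.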
Calibration was performed automatically with the fsQCA 3.0 software. In addition to calibrating the normalized data, we compared the calibrated result with the scope and presence of countries in the production of AI patents. The cut-offs were confirmed against the qualitative assessments of the Stanford University AI Vibrancy Index2 to justify the choices made in Table 2.3 As the QCA method is qualitative, it requires a theory that makes the calibration process coherent. Confronting the calibrated outcome with the AI Vibrancy Index serves to confirm the calibration process, as required by the QCA method. The justification is to consider the extent of patent production and the insertion of AI technologies, according to the insertion and motivation captured by the AI Vibrancy Index regarding AI development. This vibrancy, captured by the index, comprises the dimensions of research and development, hiring, labor market expansion with AI, ethical challenges, education and training in AI, inclusion, and national strategies. The cases of Argentina, Mexico, Saudi Arabia, and Turkey could not be confirmed with the AI Vibrancy Index: although they are members of the G20, there are no data on them in the AI Vibrancy Index research.
4.3 Conditions – Policy instruments
The QCA method requires a description of the set of conditions that explains the outcomes. Data were collected from the OECD AI Policy Observatory and describe the policy instruments employed by each G20 member country, broken down into portfolios of financing, collaborative structures, governance, and AI regulation. The financing portfolio lists the instruments used to increase the financial resources available for AI research and development. The portfolio of collaborative structures comprises instruments related to facilitating access to data, supporting infrastructure, and building networks between developers and industry to facilitate AI development. The governance portfolio defines the institutions for AI policy design, including instruments for public consultation, regulatory oversight, public campaigns, standards and certifications, horizontal coordination structures, and policy intelligence. Finally, the regulatory portfolio is intended to create a framework of incentives and controls for the development of AI. This portfolio includes technology extension and business advice for industry and the regulation of emerging technologies such as AI, the internet of things, and blockchain. It also includes the regulation of labor mobility, the expansion of training, and awards and public challenges that encourage technology development.
The instruments were analyzed to compose a table on each country's instruments in financing portfolios, collaborative structures, governance, and technology regulation. Table 3 below presents the initial conditions of the research design, associating the instrument portfolios.
This set of conditions was collected from the OECD AI Policy Observatory based on the initiatives of the G20 countries to carry out their AI policies. These conditions represent a set of instruments deployed to achieve policy outcomes. Table 4 below shows the set of instruments used by the G20 countries to carry out their AI policies.
This set of conditions represents portfolios of AI policy instruments, aiming at a digital transformation process that encompasses markets, industry, and governments. The instruments introduced in Table 3 comprise four related portfolios for AI development. Within each of these portfolios, we have a set of objectives associated with conditions that reflect a comprehensive understanding of AI policies recognized in the literature. The analysis of the sets of these conditions allows us to understand which combinations of instruments produce outcomes in AI development.
5. Results
The results of this research comprise the comparison of the sets of policy instruments and how they explain the set of patents produced between 2015 and 2020. The first requirement of the QCA method is to test the necessary and sufficient conditions relating instruments to AI policy outcomes in the G20. Table 5 presents the analysis of consistency and coverage for each instrument, as well as for the set of negated instruments. In this case, the analysis of necessary conditions allows us to affirm whether the presence of a certain policy instrument is a condition for the outcome to be present.
A condition X is necessary if, whenever outcome Y is present, the condition is also present. Consistency indicates the proportion of the outcome included in the set of each condition. To claim that a condition is necessary, it must exhibit a consistency of at least 0.8 (SCHNEIDER; WAGEMANN, 2010). Coverage captures the degree to which a necessary condition is empirically relevant.
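In set-theoretic terms, these two measures have simple formulas: consistency of necessity is the sum of min(X, Y) over the sum of Y, and coverage is the sum of min(X, Y) over the sum of X. The Python sketch below illustrates both, using hypothetical fuzzy membership scores rather than the paper's data.

def necessity_consistency(x, y):
    """Degree to which condition X is necessary for outcome Y: sum of min(x, y) over sum of y."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

def necessity_coverage(x, y):
    """Empirical relevance of a necessary condition: sum of min(x, y) over sum of x."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Hypothetical membership scores in the condition set (x) and the outcome set (y).
x = [0.9, 0.8, 0.7, 0.4, 0.2]
y = [0.8, 0.7, 0.6, 0.3, 0.1]
print(round(necessity_consistency(x, y), 2), round(necessity_coverage(x, y), 2))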
The data in Table 5 reveal some interesting factors for the empirical analysis of policy instruments. Looking at consistency, all negated sets show low consistency: the absence of instruments does not alter outcomes in terms of patents. Looking at the positive sets, only the instruments “project grants,” “collaborative networks and platforms,” and “consultation” present consistencies that point to them as essential for the development of AI policy. Project grant instruments, consultations for AI policy, and encouraging the formation of collaborative networks and platforms are necessary conditions for producing outcomes in terms of AI patents.
When presenting the truth tables, we chose to analyze each instrument portfolio separately.
5.1 Financing
Financial instruments are an essential condition for AI policy. AI development, which is reflected in patents, depends on public investment and financing structures that enable the advancement of technology. In this portfolio, we analyze whether institutional funding for research and development, project grants, funds for companies, funding for centers of excellence, and government procurement explain the outcome of AI patents. The truth table for financial instruments is as follows:
The truth table for financial instruments (Table 6) indicates that countries adopt different combinations of instruments. Portfolios are varied and fragmented, with countries adopting different strategies to finance AI development. The case of India deserves to be highlighted: according to the OECD, India does not employ any financial instrument but produces interesting outcomes in the race to master AI technology. The case of India thus reveals a counterintuitive relationship with the standards of AI policy.
There is fragmentation in the choice of policy instruments and few aggregated sets of countries. One group, formed by China, South Korea, and the United Kingdom, uses all financial policy instruments. For this set of countries, financial policy instruments result in the production of AI patents. Another group, formed by Argentina, India, South Africa, and Spain, does not use any financial policy instrument and produces few patents. The other cases have different instrument configurations. Japan, Germany, France, and the United States have different instrument configurations but positive outcomes in patents. Finally, the remaining countries have different instrument configurations but low AI policy outcomes, as in the case of Brazil.
The intermediate solution for this truth table unfolds into an analysis of combinations of financial policy instruments (Table 7). The analysis of policy portfolios means that the combination of specific instruments explains the production of AI patents.
These intermediate solutions reveal the policy mixes needed to produce patent outcomes. These instrument configurations are highly consistent and of empirical interest. Looking at typical cases, the most consistent solution is project funding, intended to accelerate AI development and secure the financial resources to support research. For example, the United States created a public fund to finance AI projects, mainly applied to the military sector and industry. Brazil has recently built financing structures through the Funding Authority for Studies and Projects (FINEP), with public calls for innovation applying AI. It is interesting to observe the position, within the configuration of the sets, of centers of excellence in AI. Centers of excellence are publicly funded or run as public-private partnerships to accelerate AI development. The use of centers of excellence is combined with funds for companies in countries such as the UK, China, South Korea, Germany, France, and Russia. Another possible combination is centers of excellence with the absence of institutional funds, as in the Netherlands, Germany, Brazil, and Russia. Although this combination of instruments is counterintuitive, funding AI centers of excellence means developing the technology directly in public models or through public-private partnerships.
5.2 Collaborative structures
The existence of portfolios of collaboration instruments represents a necessary condition for producing AI patents, especially collaborative networks and platforms (Table 5). The portfolios of collaborative structures are intended to strengthen developer networks and build operational support for research and development. The analysis of the truth table for collaborative structures aims to understand how countries organize their collaborative instruments and which configurations are necessary conditions to explain the outcome.
The truth table (Table 8) indicates that groupings of countries by policy instruments form opposing groups. On the one hand, the group formed by France, Germany, South Korea, the United Kingdom, and the United States applies all three collaboration instruments and produces patents. For example, the British government created the Alan Turing Institute to build collaborative networks for developers and to structure and make available public data that industry can use. China and Japan produce outcomes in terms of patents but apply different combinations of instruments. On the other hand, the group formed by Australia, Brazil, India, Mexico, Russia, South Africa, and Spain does not apply any of these collaboration and operational support instruments for AI development. The cases of Saudi Arabia and Turkey are interesting because they have dedicated support for research infrastructure and information and data access services, but they do not produce patents.
For example, with the support of the Council of Economic and Development Affairs, Saudi Arabia created the Saudi Data and AI Authority, which formulated the National Strategy for Data and AI (NSDAI). The NSDAI involved extensive consultation with market and industry players, converging interests that make Saudi Arabia attractive for digital business and technology development. The Saudi Data and AI Authority has begun to centralize the entire process of collecting, storing, and sharing data to enable business partnerships and provide collaborative arrangements with market and industry interests for data access and the availability of public funds.
The intermediate solution (Table 9) for collaborative structure instruments indicates that collaborative networks and platforms, associated with dedicated support for research infrastructure or with information services and data access structures, explain the outcomes with high consistency. In this case, Japan's outstanding performance in the production of AI patents stems from robust collaborative structures, which expand access to data and create networks and collaboration platforms that accelerate research development.
5.3 Governance structures
The analysis of governance structures reflects the grouping of cases according to the configuration of governance instruments adopted by each G20 country. The distribution of cases in the truth table (Table 10) is very dispersed, with countries adopting different combinations of governance instruments. The relationship between governance instruments and the production of AI patents does not demonstrate uniformity or substantive groupings of cases, but rather a series of atypical cases. The configuration formed by Canada, Germany, and the United Kingdom, the group formed by China, India, South Korea, Mexico, and the Netherlands, and the group formed by Brazil and Italy do not produce outcomes in AI patents. The success stories in the application of governance instruments are France and the United States.
The case of the United States represents the portfolio with the greatest diversity of governance instruments, achieving effective outcomes in terms of AI patents. The US combines public consultations, regulatory oversight, public information campaigns, horizontal policy coordination structures, product standards and certifications, and policy intelligence. This mix of instruments supports the achievement of public policy objectives, resulting, in our analysis, in the intermediate solution of the truth table depicted in Table 11.
The case of France, for example, suggests a mode of governance that dispenses with horizontal coordination structures, deploying a variety of instruments in a more hierarchical style. The cases of Brazil and Italy suggest a low diversity of instruments in the governance mode, although both favor public consultations. For example, the Brazilian government published Brazil's AI strategy after an extensive public consultation, but the consultation was not compiled into official documents. The case of Russia favors the definition of technology standards and certifications, without the other instruments of the governance portfolio.
5.4 Regulation
Regulation consists of a set of policy instruments through which the government can exercise its authority directly or indirectly in society. The extension of AI technologies produces many challenges for governments in regulating the different dimensions of technology production. The truth table analysis (Table 12) shows the configurations of regulatory instrument mixes across the cases.
Comparing the regulatory instrument portfolios of the G20 countries, we note a fragmentation of country experiences. The United States and South Korea regulate emerging technologies, create incentives to extend AI in industry and markets, and create incentives for the labor force transition under AI dominance. The group formed by Australia, Japan, the Netherlands, Brazil, Spain, South Africa, Argentina, Mexico, Saudi Arabia, and Turkey has not yet defined its regulatory instruments for AI. The United States, South Korea, France, China, Germany, and the United Kingdom define policies for the labor transition and for building human capital, reaching leadership positions.
The intermediate solution (Table 13) presents the consistency and empirical interest of the analysis of regulatory instruments. To produce AI patents, the combinations of technology extension and business advice (TEAS), emerging technology regulation (ETR), and labor mobility regulation and incentives (LMR) produce the best outcomes. This result suggests that the regulation of emerging technologies and policies that deal with changes in the labor market produce outcomes in AI patents. Common to these two solutions is emerging technology regulation (ETR) as an essential topic for AI policy. As in the analysis of the previous portfolios, the United States has a greater diversity of instruments, enabling the acceleration of research and development with regulation aimed at extending technology into industry, creating incentives for training and mobility of human capital, and addressing themes associated with the regulation of emerging technologies, such as data governance, control models, and development platforms.
6. Discussion
Comparing the G20 countries' AI policies presents interesting results in terms of the use of policy instruments to produce outcomes. The main finding, comparing the four policy portfolios for AI, is that a greater diversity of policy instrument configurations provides better outcomes, measured in terms of patent registrations. The race for AI technologies has revealed interesting outcomes, with a systematic increase in the patents that organize the transfer of technology from universities and research centers to industry and markets. The analysis of the AI policies of the G20 countries showed how policy instruments influence the patent production process through a variety of combinations. The analysis reveals that the different dimensions of the mix of instrument portfolios - financing, collaboration, governance, and regulation - imply different AI policy outcomes. The comparison of the G20 countries shows that countries design instrument portfolios in different ways, making the diversity of instruments an important factor in explaining the outcomes achieved. The literature has pointed out that diversity in the use of instruments across different portfolios is fundamental for the quality and effectiveness of policy design (FERNÁNDEZ-I-MARÍN; KNILL; STEINEBACH, 2021).
The configuration of policy instruments explains the outcomes achieved in the AI policies of the G20 countries, composing a complex analytical framework motivated by a perspective of global technological dominance. There are successful cases, such as South Korea and the United States, which apply a complex and comprehensive set of instruments associated with a large capacity for AI patent registration. On the other hand, developing countries lack capabilities and apply few portfolio combinations in their AI policies. The results allow research on AI policy to advance by linking the two dimensions mentioned above: on the one hand, incentives for universities and research centers to accelerate the production of AI technologies; on the other, an emerging process of regulation and institutional frameworks that make it possible to control the impacts of AI on society. Countries such as the United States, France, South Korea, the United Kingdom, Germany, and China, which implement instruments in both dimensions, reap better outcomes in patents and in the dissemination of AI.
In the case of the race for AI, the leadership position of the United States derives not only from the country's capacity for innovation but also from the diversity of policy instruments that the government designs to achieve outcomes in global terms of patents and technological domination.
An important point for future research is how these combinations of instruments increase or decrease the asymmetries related to AI technologies among the G20 countries. The results do not allow this inference. However, they indicate that these asymmetries are increasing among groups of G20 countries, creating processes of global dependence on the technologies of the countries that lead the race for AI.
7. Conclusion
This article advances the issue of AI policy by focusing on the use of public policy instruments to produce patents. The advancement of AI technologies challenges countries with regard to the problems of digital sovereignty. The race for AI technologies makes countries design policies that consider two essential dimensions: incentives for communities of knowledge and practice to accelerate the production of technologies, and the creation of institutions that address the impacts of AI on various issues, such as human capital mobility, regulatory oversight and ethics, and emerging technology regulation.
The performance of AI policy can be explained, in comparative research, by the diversity of policy instruments applied and the various combinations shaped into portfolios that governments use to achieve their objectives. The results are uneven and show how different combinations of instruments produce different outcomes. There are limitations to the research. First, the link between AI policy instruments and patent production is indirect. The production of patents depends on different intellectual property structures and registration conditions, creating an unequal market between countries. Second, the performance of public policy is driven by many factors that interact with the instruments. For example, policy performance depends not only on instruments but also on state capacity, implementation structures, and various external factors, such as political stability and societal trust in the institutional functioning of governments.
In many cases, the current situation of the AI race can reproduce path-dependent situations. AI policy is not necessarily a new topic, but it depends on a series of changes in the institutional structure that can reproduce past choices and create difficulties for change in the present scenario. Currently, the race for AI has promoted aggressive policies aimed at accelerating the production of technology to contain the geopolitical risks of “falling behind.” However, there are difficulties in reversing the path or producing significant changes in AI policy. Changes tend to be incremental, and at the current stage of the G20 countries, the trend is for technological inequalities to persist alongside the construction of an institutional framework to contain geopolitical risks.
Portfolios of public policy instruments make the difference in this race for AI. However, they need to be analyzed in context, considering the challenges related to infrastructure - logical and computational - and market extension, which create almost “natural” winners in this race.
Acknowledgements
The author is grateful to Flávio Cireno Fernandes and Alexandre Gomide for reading an initial version of this article and for collaborating with the use of qualitative methodologies.
1. The OECD collects this outcome using Microsoft's MAG tool, using a subset comprising patents related to AI. A patent is considered to be about AI if it is tagged, during the concept detection operation, with a field of study categorized under either the "artificial intelligence" or the "machine learning" fields of study in the MAG taxonomy.

2. The AI Vibrancy Index is a collection of AI development data and metrics that compares 29 countries across 23 different indicators. Conducted by Stanford University, the AI Vibrancy Index collects data from a broad set of academic, private, and nonprofit organizations, as well as self-collected data and original analysis, data on global AI legislation records in 25 countries, and an in-depth analysis of technical AI ethics metrics. The result is an aggregation of these data to show how vibrant a particular country is in AI development.

3. For more details, see the OECD AI Policy Observatory methodological note in Organisation for Economic Co-operation and Development (2020).

Source of funding: Conselho Nacional de Desenvolvimento Científico e Tecnológico – Bolsa de Produtividade em Pesquisa, fund number 303273/2020-8.
References
- ALLEN, J. R.; HUSAIN, A. The next space race is artificial intelligence. Foreign Policy, 3 Nov. 2017.
- ARCHIBUGI, D.; PLANTA, M. Measuring technology change through patents and innovation surveys. Technovation, Amsterdam, v. 16, n. 9, p. 451-519, 1996. http://dx.doi.org/10.1016/0166-4972(96)00031-4
- BALI, A. S.; HOWLETT, M. P.; RAMESH, M. Unpacking policy portfolios: primary and secondary aspects of tool use in policy mixes. Journal of Asian Public Policy, Abingdon, v. 14, p. 1-17, 2021. http://dx.doi.org/10.1080/17516234.2021.1907653
- BENJAMIN, R. Race after technology: abolitionist tools for the New Jim Code. Cambridge: Polity Books, 2019.
- BOSTROM, N. Strategic implications of openness in AI development. Global Policy, Oxford, v. 8, n. 2, p. 135-148, 2017. http://dx.doi.org/10.1111/1758-5899.12403
- CALO, R. Artificial intelligence policy: a primer and roadmap. UCD Law Review, Davis, v. 51-52, p. 399-435, 2017. Available from: <https://heinonline.org/HOL/LandingPage?handle=hein.journals/davlr51&div=18&id=&page=>. Access in: 4 Nov 2021.
- CAVE, S.; ÓHÉIGEARTAIGH, S. An AI race for strategic advantage: rhetoric and risks. In: AIES '18: 2018 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2018, New Orleans, LA, USA. Proceedings... New York: ACM, 2018. p. 2-3. https://doi.org/10.1145/3278721.3278780
- COLLIER, M.; FU, R.; YIN, L. Artificial intelligence: healthcare's new nervous system. United States: Accenture, 2017. Available from: <https://www.accenture.com/t20170418T023052Z__w__/au-en/_acnmedia/PDF-49/Accenture-Health-Artificial-Intelligence.pdf>. Access in: 4 Nov 2021.
- COULDRY, N.; MEJIAS, U. Data colonialism: rethinking big data's relation to the contemporary subject. Television & New Media, Minnesota, v. 20, n. 4, p. 336-349, 2019. http://dx.doi.org/10.1177/1527476418796632
- DANAHER, J. The threat of algocracy: reality, resistance, and accommodation. Philosophy & Technology, Dordrecht, v. 29, n. 3, p. 245-268, 2016. http://dx.doi.org/10.1007/s13347-015-0211-1
- DWIVEDI, Y. K. et al. Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice, and policy. International Journal of Information Management, Guildford, v. 57, 101994, 2021. http://dx.doi.org/10.1016/j.ijinfomgt.2019.08.002
- EUBANKS, V. Automating inequality: how high-tech tools profile, police, and punish the poor. New York: St. Martin's Press, 2018.
- FERNÁNDEZ-I-MARÍN, X.; KNILL, C.; STEINEBACH, Y. Studying policy design quality in comparative perspective. The American Political Science Review, Evansville, v. 115, n. 3, p. 931-947, 2021. http://dx.doi.org/10.1017/S0003055421000186
- FLACH, P. Machine learning: the art and science of algorithms that make sense of data. New York: Cambridge University Press, 2012.
- FLORIDI, L. The fight for digital sovereignty: what it is, and why it matters, especially for the EU. Philosophy & Technology, Dordrecht, v. 33, n. 3, p. 369-378, 2020. http://dx.doi.org/10.1007/s13347-020-00423-6
- G20. Foresight: science for navigating critical transitions – task force 3 – digital revolution. Riyadh, 2020. Available from: <https://s20saudiarabia.org.sa/en/priorityareas/Pages/tf3digitalrevolution.aspx>. Access in: 4 Nov 2021.
- GLEN, C. M. Internet governance: territorializing cyberspace? Politics & Policy, London, v. 42, n. 5, p. 635-657, 2014. http://dx.doi.org/10.1111/polp.12093
- HOWLETT, M. P. Designing public policies: principles and instruments. London: Routledge, 2019.
- HOWLETT, M. P. What is a policy instrument? Tools, mixes, and styles of implementation. In: ELIADIS, P.; HILL, M. M.; HOWLETT, M. (Org.). Designing government: from instruments to governance. Montreal: McGill-Queen's University Press, 2005.
- HOWLETT, M. P. Governance modes, policy regimes, and operational plans: a multi-level nested model of policy instrument choice and policy design. Policy Sciences, Dordrecht, v. 42, n. 1, p. 73-89, 2009. http://dx.doi.org/10.1007/s11077-009-9079-1
- HUMMELL, P.; ALEXANDER, F.; LIEBIG, J. Sovereignty and data sharing. ITU Journal: ICT Discoveries, Geneva, v. 23, n. 2, p. 1-10, 2018.
- INGRAM, H.; SCHNEIDER, A. The behavioral assumptions of policy tools. The Journal of Politics, Gainesville, v. 52, n. 2, p. 510-529, 1990. http://dx.doi.org/10.2307/2131904
- JEFFERIES, D. The automated city: do we still need humans to run public services? 2016. Available from: <https://www.theguardian.com/cities/2016/sep/20/automated-city-robots-runpublic-services-councils>. Access in: 2 July 2018.
- JORDAN, A. et al. Environmental policy: governing by multiple policy instruments? In: RICHARDSON, J. (Org.). Constructing a policy state? Policy dynamics in the EU. Oxford: Oxford University Press, 2012. p. 104-124.
- KAHWATI, L.; KANE, H. L. Qualitative comparative analysis in mixed methods research and evaluation. Thousand Oaks: SAGE Publications, 2020.
- KATZ, Y. Manufacturing an artificial intelligence revolution. SSRN, 27 Nov. 2017. http://dx.doi.org/10.2139/ssrn.3078224
- KEOHANE, R.; NYE, J. Power and interdependence in the information age. Foreign Affairs, New York, v. 77, n. 5, p. 81-94, 1998. http://dx.doi.org/10.2307/20049052
- MARKRAM, H. The Blue Brain Project. Nature Reviews Neuroscience, London, v. 7, n. 2, p. 153-160, 2006. http://dx.doi.org/10.1038/nrn1848
- MEHR, H. Artificial intelligence for citizen services and government. Cambridge: Harvard Kennedy School, Ash Center for Democratic Governance and Innovation, 2017. Available from: <https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf>. Access in: 4 Nov 2021.
- MINSKY, M. The society of mind. New York: Simon and Schuster, 1985.
- NEWELL, A.; SIMON, H. A. Computer simulation of human thinking. Science, Washington, v. 134, n. 3495, p. 2011-2017, 1961. http://dx.doi.org/10.1126/science.134.3495.2011
- NYE, J. S. The regime complex for managing global cyber activities. Ontario: Global Commission on Internet Governance, 2014. (Paper Series, 1). Available from: <https://www.cigionline.org/sites/default/files/gcig_paper_no1.pdf>. Access in: 4 Nov 2021.
- ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT – OECD. Data in the digital age: OECD going digital policy note. Paris: OECD Publishing, 2019. Available from: <www.oecd.org/going-digital/data-in-the-digital-age.pdf>. Access in: 4 Nov 2021.
- ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT – OECD. OECD AI Policy Observatory. OECD.AI data from partners: methodological note. Paris: OECD Publishing, 2020. Available from: <https://www.oecd.ai/assets/files/Methodology_20200219.pdf>. Access in: 4 Nov 2021.
- PETERS, B. G. Policy problems and policy design. Cheltenham: Edward Elgar Publishing, 2018.
- PETERS, B. G. Policy instruments and policy capacity. In: PAINTER, M.; PIERRE, J. (Org.). Challenges to state policy capacity. London: Palgrave Macmillan, 2005.
- POWER, D. J. "Big Brother" can watch us. Journal of Decision Systems, London, v. 25, suppl., p. 578-588, 2016. https://doi.org/10.1080/12460125.2016.1187420
- RAGIN, C. Redesigning social inquiry: fuzzy sets and beyond. Chicago: The University of Chicago Press, 2008.
- RUSSELL, S. Rationality and intelligence. Artificial Intelligence, New York, v. 94, n. 1-2, p. 57-77, 1997. http://dx.doi.org/10.1016/S0004-3702(97)00026-X
- RUSSELL, S.; NORVIG, P. Artificial intelligence: a modern approach. Englewood Cliffs: Prentice-Hall, 2010.
- SALAMON, L. The new governance and tools of public action: an introduction. The Fordham Urban Law Journal, New York, v. 28, n. 5, p. 1611-1674, 2011.
- SCHAFFRIN, A.; SEWERIN, S.; SEUBERT, S. The innovativeness of national policy portfolios: climate policy change in Austria, Germany, and the UK. Environmental Politics, London, v. 23, n. 5, p. 860-883, 2014. http://dx.doi.org/10.1080/09644016.2014.924206
- SCHNEIDER, C. Q.; WAGEMANN, C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis. Cambridge: Cambridge University Press, 2010.
- SERGI, B. S. et al. Understanding industry 4.0: AI, the Internet of Things, and the future of work. Bingley: Emerald Publishing, 2019.
- SHACKELFORD, S. Governing new frontiers in the information age: toward cyber peace. Cambridge: Cambridge University Press, 2020.
- SIMON, H. A. Models of man: social and rational. New York: John Wiley, 1957.
- SIMON, H. A. Models of thought. New Haven: Yale University Press, 1979. v. I.
- SIMON, H. A. The structure of ill structured problems. Artificial Intelligence, New York, v. 4, n. 3-4, p. 181-201, 1973. http://dx.doi.org/10.1016/0004-3702(73)90011-8
- SIMON, H. A. Why should machines learn? In: MICHALSKI, R. S.; CARBONELL, J. G.; MITCHELL, T. M. (Org.). Machine learning: symbolic computation. Berlin: Springer, 1983. http://dx.doi.org/10.1007/978-3-662-12405-5_2
- SUNSTEIN, C. #Republic: divided democracy in the age of social media. Princeton: Princeton University Press, 2018.
- TADDEO, M.; FLORIDI, L. Regulate artificial intelligence to avert cyber arms race. Nature, London, v. 556, n. 7701, p. 296-298, 2018. http://dx.doi.org/10.1038/d41586-018-04602-6
- UNITED NATIONS. The age of digital interdependence: report of the Secretary-General's High-level Panel on Digital Cooperation. New York: United Nations, 2019. Available from: <https://www.un.org/en/pdfs/DigitalCooperation-report-for%20web.pdf>. Access in: 4 Nov 2021.
- WADE, R. Emerging world order? From multipolarity to multilateralism in the G20, the World Bank, and the IMF. Politics & Society, Thousand Oaks, v. 39, n. 3, p. 347-378, 2011. http://dx.doi.org/10.1177/0032329211415503
- WANG, P. On defining artificial intelligence. Journal of Artificial General Intelligence, Vienna, v. 10, n. 2, p. 1-37, 2019. http://dx.doi.org/10.2478/jagi-2019-0002
- YU, K. H.; KOHANE, I. S. Framing the challenges of artificial intelligence in medicine. BMJ Quality & Safety, London, v. 28, n. 3, p. 238-241, 2019. http://dx.doi.org/10.1136/bmjqs-2018-008551
- ZHENG, Y. et al. SmartHS: an AI platform for improving government service provision. In: AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 32., 2018, New Orleans, Louisiana, USA. Proceedings... Palo Alto: AAAI Press, 2018. p. 7704-7711.
- ZHONG, R. Y. et al. Intelligent manufacturing in the context of industry 4.0: a review. Engineering, China, v. 3, n. 5, p. 616-630, 2017. http://dx.doi.org/10.1016/J.ENG.2017.05.015
Publication Dates
- Publication in this collection: 22 Aug 2022
- Date of issue: 2022

History
- Received: 04 Nov 2021
- Reviewed: 15 June 2022
- Accepted: 20 July 2022