
AI Literacy Research: Frontier for High-Impact Research and Ethics

INITIAL REFLECTIONS

Artificial intelligence (AI) has emerged as a driving force in scientific development, influencing areas such as Administration and Public Administration, which are central themes of the Brazilian Administration Review (BAR). Despite its transformative promise, the appropriate and ethical use of AI in academia still presents substantial challenges, especially for researchers, who require specific skills to understand, critique, and utilize these technologies (Dwivedi et al., 2023; Susarla et al., 2023). Even though AI is an important support tool for improving texts, the risks of hallucination (generating incorrect information) and of transforming already written texts (plagiarism) worry the academic community. In addition, the continuous advancement of large language models (LLMs) makes tools for detecting AI-written text often unreliable. In this context, AI literacy emerges as an essential competence for researchers to use AI tools critically, ethically, and effectively.

This editorial discusses the growing need to promote AI literacy as a fundamental prerequisite for researchers. It reflects on the impacts of this competence on the training of scientists capable of exploring AI’s potential while understanding its limitations and ethical implications. We argue that research on AI literacy should be a priority, not only as an emerging field of study but also as a central axis in preparing the next generation of researchers, ensuring the responsible use of AI in advancing knowledge.

THE BASIS OF GENERATIVE ARTIFICIAL INTELLIGENCE IN RESEARCH

Generative AI relies on large language models (LLMs) developed to enable human-computer interaction through natural language. Derived from the field of natural language processing (NLP), its development dates back to the 1960s, when the first probabilistic and Markovian models were developed to map human language for the computer (Banh & Strobel, 2023). From there, advances in computational processing capacity allowed models with greater probabilistic calculation capacity to be implemented. At the same time, a larger amount of available data improved the performance of the models, culminating in widely available solutions such as ChatGPT, Claude, Gemini, and others.
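The Markovian idea behind those early models can be pictured with a toy first-order chain: the next word is sampled from the successors observed after the current word in a training text. This is an illustrative sketch only, not how modern LLMs are implemented:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which words were observed to follow which (a first-order Markov chain)."""
    words = text.split()
    transitions = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=6, seed=0):
    """Sample a word sequence by repeatedly picking an observed successor."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        successors = transitions.get(out[-1])
        if not successors:
            break  # dead end: no successor was ever observed
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the model predicts the next word given the previous word"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Modern LLMs replace these raw counts with neural networks trained on vast corpora, but the core task, predicting the next token from context, remains the same.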

However, these models were not developed for specific purposes (Susarla et al., 2023), and experts use the expression ‘stochastic parrot’ (Bender et al., 2021) to illustrate the lack of genuine understanding in current models. According to the classical theory of AI (Russell & Norvig, 2016), an intelligent agent comprises three pillars: a learning model, a search method, and a representation of knowledge. LLMs are perceived to have a sophisticated implementation of the first two, which are the two core technologies of their operation (Zhang et al., 2023). However, their knowledge representation, the database they use to generate written content, is generic, which does not allow them to be positioned as specialist tools for any task, as shown in Figure 1.

Figure 1
Intelligent agents and LLMs.

Although there are efforts to overcome this challenge (Bi et al., 2024), competition and the need for fast performance make this third pillar unattractive to the organizations that develop the models, since adding a layer of knowledge representation can introduce latency the user does not want. However, we recognize that LLMs and other generative AI tools have the potential to raise the quality of academic output by offering support in the formal aspects of research. In addition, ‘the train has left the station.’ It therefore does not seem productive to debate permitting or prohibiting the tools, but rather to promote a joint reflection on the most suitable format for their use. Thus, with no way to block their use and no wish to prevent their good use, this editorial reflects on the use of AI and the respective models through the lens of researcher literacy, via best practices and guidelines for adopting AI tools in scientific production.
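The trade-off between grounding and speed can be sketched as a hypothetical wrapper that consults a domain knowledge base before generating. All names, contents, and timings below are invented for illustration; they do not describe any vendor's actual architecture:

```python
import time

# Hypothetical domain-specific knowledge base (illustrative contents only).
KNOWLEDGE_BASE = {
    "stochastic parrot": "Term from Bender et al. (2021) for LLMs' lack of understanding.",
}

def generate_ungrounded(query):
    """Plain generative step: fast, but free to hallucinate."""
    return f"(model-generated text about: {query})"

def generate_grounded(query):
    """Adds a knowledge-representation lookup (the 'third pillar') before generating."""
    time.sleep(0.005)  # stand-in for the extra retrieval latency
    fact = KNOWLEDGE_BASE.get(query.lower())
    if fact is not None:
        return f"(grounded) {fact}"
    return generate_ungrounded(query)
```

The lookup makes answers verifiable when the knowledge base covers the topic, at the cost of an extra step on every query — the latency the editorial notes model providers prefer to avoid.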

Within this problem, some latent challenges in scientific production with AI stand out. Although the list does not exhaust the agenda, four axes are commonly raised as doubts by academics at events and lectures on the subject:

  • (1) Risk of plagiarism (AI rewriting of texts)

  • (2) Risk of hallucination (use of AI-generated information without evidence or reference)

  • (3) LLM as co-author (summaries; changes in text size; discussion of results)

  • (4) LLM as advisor (survey of options, research decisions, interpretation of data)

These challenges represent a fundamental change in the way scientific production is carried out, introducing new dynamics that directly affect the integrity and reliability of academic research. For example, the risk of plagiarism is heightened by the use of AI tools to rewrite texts, which makes unauthorized copies harder to detect. Similarly, when information is generated without evidence, data hallucination jeopardizes the validity of the conclusions of studies that misuse AI. In addition, the role of LLMs as co-authors and advisors raises questions about the autonomy of the researcher and the originality of scientific contributions. Such risks can compromise the quality of research and undermine trust in science, making it crucial to develop clear guidelines and strengthen AI literacy in the academic environment.

THE FOUNDATION OF AI LITERACY

The concept of AI literacy refers to the ability to understand the fundamentals of AI, its applications, limitations, risks, and the ethical use of the technology. For Sperling et al. (2024), the central dimensions of these capacities are:

  • (1) Technical knowledge: refers to understanding the fundamentals of AI, including its concepts, algorithms, and how AI technologies work. It includes notions of machine learning, data processing, and the basic architecture of AI systems.

  • (2) Practical skills: involves the ability to use AI tools and platforms. For researchers, this could mean the ability to apply AI models in their research, analyze data with AI algorithms, and interpret results obtained from these tools.

  • (3) Digital literacy: relates to the ability to navigate and use digital technology in general, including systems that support AI, such as databases, data visualization software, and simulation platforms.

  • (4) Critical and ethical thinking: involves the ability to critically evaluate the results generated by AI, considering its limitations, biases, potential ethical implications, and social impacts. Therefore, it is essential to ensure that the use of AI is responsible and aligned with ethical research principles.

  • (5) Understanding contextual applications: refers to understanding how AI can be applied in different research contexts, considering the specific field of study (such as administration, education, public policy, etc.). It also includes assessing the risks and benefits in each application.

  • (6) Human-AI collaboration: the ability to collaborate with AI systems productively, maximizing the capabilities of these tools while recognizing their limitations. This includes knowing when to trust AI and when to lean on human judgment.

However, the development of these competencies in the academic environment is still in its early stages. As pointed out by Du et al. (2024), many educators lack the knowledge necessary to integrate AI into their pedagogical practices efficiently, and structured efforts are required to promote AI literacy at higher levels of education. In graduate studies, this gap is especially problematic, since professors play a fundamental role in training a new generation of researchers and professionals and face a particular challenge: they need to teach the use of AI in the scientific process without having been trained in this approach themselves.

THE IMPORTANCE OF AI LITERACY FOR RESEARCHERS

When discussing the relevance of AI literacy research, it is essential to highlight the impacts of the limited domain of AI among faculty and students, especially in public administration and management. UNESCO’s AI Competency Framework emphasizes that educators should be more than passive consumers of technology; they should act as critical reviewers, designers, and facilitators of AI-assisted pedagogical practices. This implies that AI literacy should not be limited to technical knowledge but should encompass ethical, social, and critical skills. In this sense, the training of educators and researchers must contemplate the use of AI tools and the ability to reflect on the impacts of these technologies, considering the risks of algorithmic biases, data privacy, and possible repercussions on society.

Thus, the challenges that arise in developing AI literacy are diverse. Professors who are new to research can face significant technical barriers, from a lack of knowledge about how to apply the tools in their projects to difficulty interpreting the results they produce. In addition, there is a risk of over-reliance on these tools without an adequate understanding of their limitations, which can compromise the quality of research. For more senior researchers, the challenge may lie in keeping up with rapid technological innovations and in adopting a critical stance toward the growing enthusiasm around AI. In this context, an institutional commitment is needed to offer training and pedagogical resources that enable faculty and students to deal with AI in an informed and responsible way.

For AI literacy research to advance solidly and positively transform researchers, academic institutions must provide robust support, making technological tools available and creating clear ethical guidelines. In many universities, there is a gap between the availability of AI technologies and the training for their use. Without proper training, researchers and students may face complex ethical dilemmas without the necessary resources to address them. AI, for example, can perpetuate biases already present in the data used, and researchers, especially beginners, often lack the necessary expertise to identify and mitigate these biases. This underscores the importance of institutional support beyond access to tools, including developing policies that promote the ethical use of AI.

An example can be seen in the case of beginner researchers using machine-learning tools in investigations of consumer behavior. Without a deep understanding of the limitations of the algorithms, these researchers can misinterpret the observed patterns, assigning causality where there is only correlation. More experienced researchers, in turn, may have access to sophisticated computational resources, yet still face the challenge of integrating new technologies without compromising the scientific validity of their research. Thus, the central argument for AI literacy research in academia lies in training academics to use AI in their research and in training professionals capable of co-creating ethical rules for using AI and, more importantly, understanding the implications of this technology for society. This corroborates the research of Ng et al. (2024), who identified that AI literacy directly impacts educators’ self-efficacy and their intentions to learn about AI. These aspects are vital precursors to successfully adopting the technology in teaching and research settings.
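The correlation-versus-causality pitfall in consumer-behavior research can be reproduced in a few lines: two variables that never influence each other still correlate strongly when a hidden confounder drives both. The variable names and numbers below are invented purely for illustration:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)
# A hidden confounder (say, household income) drives BOTH observed variables.
income = [rng.gauss(0, 1) for _ in range(2000)]
ad_clicks = [z + rng.gauss(0, 0.3) for z in income]
purchases = [z + rng.gauss(0, 0.3) for z in income]

r = pearson_r(ad_clicks, purchases)
# r comes out high even though clicks and purchases never affect each other here.
print(f"correlation: {r:.2f}")
```

An uncritical reader of such output might conclude that ad clicks cause purchases; AI literacy includes knowing which questions a pattern-finding tool cannot answer on its own.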

It is critical that this capacity building also includes a critical and reflective component, where researchers at different stages of their careers are encouraged to question the role of AI in their practices and explore how this technology can be used to promote efficiency, equity, and social justice. In addition, educational institutions should promote a culture of continuous learning, providing technical training, discussion forums, and courses that address the ethical implications of AI, empowering researchers to use these tools responsibly. Ultimately, implementing a comprehensive AI literacy program requires the involvement of the entire academic community. From early-career researchers to leaders in their fields, everyone should be empowered to understand the potential and challenges of AI, ensuring that its use is always aligned with the highest standards of ethics and scientific responsibility.

FINAL THOUGHTS

While the importance of AI literacy is widely recognized, effectively integrating this knowledge into scientific research faces challenges. Educational institutions must address technological infrastructure issues and inequality in access to AI tools. Making AI tools available to students is necessary for conducting high-impact research; with this access, students can compete in innovation and scientific production in a scenario where AI is central to significant academic discoveries. To this end, universities must invest in acquiring and maintaining these technologies, ensuring that students from different backgrounds enjoy the same learning opportunities.

In addition to making technological resources available, institutions need to ensure that these tools are integrated into research programs. Providing access is not enough; AI tools must be incorporated into academic practices from the beginning of the students’ journey, favoring continuous contact with the state of the art. The constant updating of technologies and methodologies should be a priority, allowing researchers to work with the most advanced techniques and explore new frontiers of knowledge. This translates into more innovative results and more significant scientific contributions, making AI-driven research not only more impactful but also more competitive on the global stage.

To foster this culture of AI literacy, higher education institutions should start discussing this topic at events to welcome first-year students. From the first contact with the university, students should be introduced to the importance of AI in research and the literacy necessary for its responsible use. In addition, AI literacy needs to be formally described in the standards of graduate and institutional programs, serving as a structuring axis of pedagogical and scientific guidelines. This ensures that the development of these competencies is not optional or secondary but an integral part of the training of new researchers. Another aspect is the incentive to hold academic events involving journal editors, promoting alignment on the correct use of AI in the scientific editorial process. Collaboration between researchers and editors can help clarify norms, best practices, and expectations related to the use of AI in submitted research, avoiding abuses or misunderstandings that could compromise scientific quality and integrity. Such events also serve as spaces for dialogue, where the academic community can share experiences and discuss the impacts of AI on scientific production.

Finally, institutions should encourage training courses for teachers, aiming to insert AI into the reality of their research projects. By acquiring greater familiarity with these tools, teachers will be better prepared to incorporate them into their investigations and, consequently, to teach their students more effectively. The more contact teachers have with AI, the greater their ability to use it critically and innovatively, creating a learning chain that will positively impact research quality and future scientists’ training. Thus, promoting a culture of AI literacy in higher education institutions requires a joint and strategic effort. From providing adequate infrastructure to continuously training faculty members and including discussions about AI in curricula and academic events, every aspect contributes to the training of researchers prepared to face the challenges and take advantage of AI’s opportunities in science.

REFERENCES

  • Banh, L., & Strobel, G. (2023). Generative artificial intelligence. Electronic Markets, 33(1), 63. https://doi.org/10.1007/s12525-023-00680-1
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, New York, NY, USA. https://doi.org/10.1145/3442188.3445922
  • Bi, Z., Cheng, S., Chen, J., Liang, X., Xiong, F., & Zhang, N. (2024). Relphormer: Relational graph transformer for knowledge graph representations. Neurocomputing, 566(6), 127044. https://doi.org/10.1016/j.neucom.2023.127044
  • Du, H., Sun, Y., Jiang, H., Islam, A. Y. M., & Gu, X. (2024). Exploring the effects of AI literacy in teacher learning: An empirical study. Humanities and Social Sciences Communications, 11, 559. https://doi.org/10.1057/s41599-024-03101-6
  • Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., ... Wright, R. (2023). Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
  • Ng, D. T. K., Wu, W., Leung, J. K. L., Chiu, T. K. F., & Chu, S. K. W. (2024). Design and validation of the AI literacy questionnaire: The affective, behavioural, cognitive and ethical approach. British Journal of Educational Technology, 55(3), 1082-1104. https://doi.org/10.1111/bjet.13411
  • Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson.
  • Sperling, K., Stenberg, C. J., McGrath, C., Åkerfeldt, A., Heintz, F., & Stenliden, L. (2024). In search of artificial intelligence (AI) literacy in teacher education: A scoping review. Computers and Education Open, 6, 100169. https://doi.org/10.1016/j.caeo.2024.100169
  • Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus effect of generative AI: Charting the path for responsible conduct of scholarly activities in information systems. Information Systems Research, 34(2), 399-408. https://doi.org/10.1287/isre.2023.ed.v34.n2
  • Zhang, C., Zhang, C., Li, C., Qiao, Y., Zheng, S., Dam, S. K., & Hong, C. S. (2023). One small step for generative AI, one giant leap for AGI: A complete survey on ChatGPT in AIGC era. arXiv preprint. https://doi.org/10.48550/arXiv.2304.06488
  • Data Availability:

    BAR - Brazilian Administration Review encourages data sharing but, in compliance with ethical principles, it does not demand the disclosure of any means of identifying research subjects.
  • Plagiarism Check:

    BAR maintains the practice of submitting all documents received to the plagiarism check, using specific tools, e.g.: iThenticate.
  • Peer review:

    Peer review is responsible for acknowledging an article’s potential contribution to the frontiers of scholarly knowledge on business or public administration. The authors are ultimately responsible for the consistency of the theoretical references, the accurate reporting of empirical data, their personal perspectives, and the use of copyrighted material. This content was evaluated using the double-blind peer review process. The disclosure of the reviewers’ information on the first page is made only after the evaluation process concludes, and with the voluntary consent of the respective reviewers.

Edited by

Editor-in-Chief:

Ricardo Limongi https://orcid.org/0000-0003-3231-7515 (Universidade Federal de Goiás, Faculdade de Contabilidade Economia e Administração, Brazil).

Editorial assistants:

Eduarda Anastacio and Simone Rafael (ANPAD, Brazil).


Publication Dates

  • Publication in this collection
    04 Oct 2024
  • Date of issue
    2024

History

  • Published
    13 Sept 2024
ANPAD - Associação Nacional de Pós-Graduação e Pesquisa em Administração Av. Carneiro Leão, 825, Zona 04, Zip code: 87014-010, Tel.: (+55) (44) 3354-8545 - Maringá - PR - Brazil
E-mail: bar@anpad.org.br