SciELO - Scientific Electronic Library Online

 

Práxis Educativa

Print version ISSN 1809-4031. On-line version ISSN 1809-4309.

Práxis Educativa vol.20  Ponta Grossa  2025  Epub Sep 30, 2025

https://doi.org/10.5212/praxeduc.v.20.25388.057 

Thematic Section: Ethics and Integrity in Research in the Humanities and Social Sciences in the age of AI

Ethical and responsible use of GenAI in research context

Uso ético e responsável do IAGen em contexto de pesquisa

Uso ético y responsable de la IAGen en contexto de la investigación

Maria Isabel Gomes de Pinho* 
http://orcid.org/0000-0003-1714-8979

António Pedro Dias Costa** 
http://orcid.org/0000-0002-4644-5879

Cláudia Gomes de Pinho*** 
http://orcid.org/0000-0001-6249-5019

*University of Aveiro, Portugal, PhD. E-mail: <isabelpinho@ua.pt>.

**Research Centre on Didactics and Technology in the Education of Trainers, Department of Education and Psychology, University of Aveiro, Portugal, PhD. E-mail: <apcosta@ua.pt>.

***University of Aveiro, Portugal, MSc. E-mail: <claudiapinho@ua.pt>.


Abstract

This article begins by framing the intersection between ethics and artificial intelligence (AI), considering two main dimensions: normative ethics and applied ethics. The second section reinforces the need to move from the abstract to the practical level with an integrated perspective. The third section examines roles and responsibilities in AI safety. Next, the text focuses on Generative AI (GenAI) and academic integrity. From a user perspective, it explores an activity within the research context, emphasizing the application of GenAI to qualitative data analysis. Finally, the article presents the GenAI Governance Model, which outlines a comprehensive framework for the responsible and integrated implementation of GenAI in research environments. It is concluded that it is necessary to adopt a governance approach to ensure the responsible use of GenAI.

Keywords: Ethics; GenAI; Governance; Research; Responsible use

Resumo

Este artigo começa por enquadrar a intersecção entre a ética e inteligência artificial (IA), considerando duas dimensões principais: ética normativa e ética aplicada. A segunda secção reforça a necessidade de caminhar do nível abstrato para o nível prático com uma visão de integração. A terceira secção analisa os papéis e as responsabilidades na segurança da IA. De seguida, o texto foca na IA Generativa (IAGen) e na integridade académica. Na perspetiva do utilizador, examina uma atividade no contexto da investigação, com foco na aplicação da IAGen à análise de dados qualitativos. Por fim, o artigo apresenta o Modelo de Governança da IAGen, que delineia uma estrutura abrangente para a implementação responsável e integrada da IAGen em ambientes de pesquisa. Conclui-se que existe a necessidade de adotar uma abordagem de governança para assegurar o uso responsável da IAGen.

Palavras-chave: Ética; IAGen; Governança; Pesquisa; Uso responsável

Resumen

Este artículo comienza abordando la intersección entre la ética y la inteligencia artificial (IA), considerando dos dimensiones principales: ética normativa y ética aplicada. La segunda sección refuerza la necesidad de pasar del nivel abstracto al nivel práctico con una visión de integración. La tercera sección analiza los roles y responsabilidades en la seguridad de la IA. En seguida, el texto se centra en la IA Generativa (IAGen) y en la integridad académica. En la perspectiva del usuario, se examina una actividad en el contexto de la investigación, con énfasis en la aplicación de la IAGen al análisis de datos cualitativos. Finalmente, el artículo presenta el Modelo de Gobernanza de la IAGen, que define una estructura amplia para la implementación responsable e integrada de la IAGen en ambientes de investigación. Se concluye que existe la necesidad de un enfoque de gobernanza para garantizar el uso responsable de la IAGen.

Palabras clave: Ética; IAGen; Gobernanza; Investigación; Uso responsable

Ethics and AI

Ethics is the part of philosophy concerned with what is morally good and bad, right and wrong. It can be divided into two main dimensions: normative ethics, which examines societal norms or rules, and applied ethics, which studies ethical issues in specific contexts. In the context of AI, ethics can likewise be divided into two dimensions: the ethics of AI and ethical AI (Siau & Wang, 2020).

The ethics of AI is part of the ethics of advanced technology, focusing on robots and other artificially intelligent agents. It can be divided into robot ethics and machine ethics. Roboethics is concerned with the moral behavior of humans as they design, construct, use, and interact with AI agents, and with the associated impacts of robots on humanity and society. Machine ethics deals with the moral behavior of Artificial Moral Agents (AMAs) and is the field of research addressing the design of such agents. As technology advances and robots become more intelligent, robots and other artificially intelligent agents should behave morally and exhibit moral values.

From Values to Practice

The normative content of the documents concerning ethical AI can span diverse concepts: values, principles, policies, standards, and guidelines. The relationship between these concepts can be summarized as follows: Values provide the underlying beliefs that inform the development of principles. Principles offer a framework for interpreting and applying values consistently. Policies operationalize values and principles into specific actions and rules to guide behavior and decision-making within organizations or societies. These concepts can be organized hierarchically, from an abstract level to an applied level (see Figure 1).

Source: Based on Mills et al. (2020).

Figure 1 From Values to Practice 

Values are fundamental beliefs that guide behavior and decision-making. Principles are permanent, universal, non-negotiable standards based on ethical and legal foundations (Ferrell et al., 2024); they are the fundamental truths or propositions that serve as the foundation for a system of belief or behavior. Often derived from values, principles provide a framework for making decisions. The guiding principles of the Universal Declaration of Human Rights, for example, have been adopted into many national constitutions and legal frameworks around the world (UN, 1948).

The OECD AI Principles are the first intergovernmental standard on AI (OECD, 2025). They promote innovative, trustworthy AI that respects human rights and democratic values. Adopted in 2019 and updated in 2024, they are composed of five values-based principles and five recommendations that provide practical and flexible guidance for policymakers and AI actors (see Figure 2).

Source: OECD (2025).

Figure 2 From Principles to Recommendations 

Everyone is entitled to personal data protection. This value is the starting point of the General Data Protection Regulation (GDPR), effective from 25 May 2018, which provides a legal framework for personal data protection in the EU, including data processed by AI systems. Access to data is crucial for the development of Artificial Intelligence in Education (AIEd), and regulatory approaches to it contrast sharply, ranging from laissez-faire to heavily regulated options. Laissez-faire environments may facilitate the collection of more learner data for AIEd research but increase the risks of privacy breaches and data misuse. Conversely, regulatory environments prioritizing privacy and data protection may enforce restrictions that limit some AIEd applications (Bai et al., 2024).

One of the first meta-analyses of published AI ethics principles was carried out by Anna Jobin and colleagues (2019). They identified 84 documents and distilled 11 ethical principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity. These are core principles because they invoke values that theories in moral and political philosophy regard as intrinsically valuable, meaning their value is not derived from something else.

In the field of Artificial Intelligence in Education (AIED), a study explored ethical concerns by mapping and analyzing the current policies and guidelines of international organizations (Nguyen et al., 2022). This yielded a total of 39 codes, which were then examined and collated into patterns of broader meaning, resulting in seven themes (i.e., principles): 1) governance and stewardship; 2) transparency and accountability; 3) sustainability and proportionality; 4) privacy; 5) security and safety; 6) inclusiveness; and 7) human-centered AIED.

An example of policies at the national level comes from Portugal, which, in 2021, enacted the “Portuguese Charter of Human Rights in the Digital Age”, a law mandating that artificial intelligence respect fundamental rights by balancing explainability, security, transparency, and responsibility to prevent prejudice and discrimination (Ferrell et al., 2024).

Guidelines and procedures are based on the associated standards and provide context on how to implement a given standard. A procedure provides the detailed mandatory steps (sometimes in the form of a checklist) someone needs to follow to achieve a recurring task or comply with a policy. Procedures can include step-by-step instructions or statements indicating where something needs to go; in short, a procedure tells you how to carry out or implement a policy. Current AI ethics principles, however, are often broad and lack specific guidance for those designing and developing AI systems (Sanderson et al., 2022).

Roles and responsibilities in AI safety

Continuous monitoring of AI safety not only means adhering to regulations, such as the European Union Artificial Intelligence Act (European Union, 2024), but also building trust and operational integrity for all stakeholders.

Ensuring AI safety requires a systematic view that considers stakeholders' various roles and responsibilities across the AI supply chain (Xia et al., 2024). Figure 3 shows the need for evaluations that span the entire AI development lifecycle and engage all relevant stakeholders. These stakeholders include:

AI Producer: An entity engaged in the design, development, testing, and supply of AI technologies, including models and components.

AI Provider: An entity that offers AI-driven products or services, including both platform providers and those offering specific AI-based products or services.

AI Partner: An entity offering AI-related services, such as system integration, data provisioning, evaluation, and auditing.

AI Deployer: An organization that utilizes an AI system by making the system or its outputs (e.g., decisions, predictions, or recommendations) available to internal or external users (e.g., customers).

AI User: An entity utilizing or relying on an AI system, ranging from organizations (e.g., businesses, governments, non-profits) to individuals or other systems.

In some contexts, an organizational AI user is equivalent to an AI deployer.

Affected Entity: An entity impacted by the decisions or behaviors of an AI system, including organizations, individuals, communities, and other systems.

Source: Xia et al. (2024).

Figure 3 AI System Evaluation 

Implementing responsible AI necessitates comprehending the practices of designers and developers, aligning them with ethical principles, and monitoring user interactions with AI from an ethical AI and human-centered AI perspective (Capel & Brereton, 2023).

UNESCO suggests that government agencies regulate GenAI tools, while educational institutions validate the ethical and pedagogical aspects of these tools (UNESCO, 2023).

Generative AI and Academic Integrity

Generative AI (GenAI) is a type of artificial intelligence (AI) technology that can generate new and unique outputs. GenAI falls under the umbrella of artificial intelligence (Figure 4), which spans diverse computational algorithms capable of performing tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, making decisions, and learning from experience (Banh & Strobel, 2023).

Source: Banh and Strobel (2023).

Figure 4 GenAI and other AI concepts 

GenAI in educational settings is a field that includes aspects such as social interaction, personalized learning, and ethical considerations (Burneo et al., 2025; Kadaruddin, 2023; Moresi et al., 2024).

The advent of AI, particularly GenAI, has profoundly impacted the field of higher education (teaching, learning, assessment, and research). Academic integrity, as defined by the International Centre for Academic Integrity, is the commitment to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage (ICAI, 2021). These principles form the foundation of academic practices. With the development of GenAI, there is increasing discussion about its potential impact on academic integrity (Eke, 2023).

It is necessary to consider ethics and integrity in research, training, and learning (Pedro, 2023; Pedro, 2024; Pretorius, 2023; Nunes, 2023). Some challenges for academic integrity and trust concern mitigating plagiarism risks while designing assessments (Liu et al., 2024; Liu, 2024; Muthanna et al., 2024; Nunes, 2024; Nunes et al., 2024). In the research context in particular, ethics is not limited to consent requests approved by ethics committees; it must be present in all research activities, and it is necessary to consider three approaches:

a) consider ethics as two structural elements of research that, therefore, need to be in the “foreground”; b) articulate ethics in research with integrity in research; and c) promote ethics and integrity in various instances, to configure an “ecosystem of ethics and integrity” (Mainardes & Comas-Forgas, 2025, p. 4).

Qualitative data analysis with Generative AI

GenAI offers both opportunities and challenges. In research, balancing innovation with responsibility and ethics is crucial. Researchers acknowledge its influence on activities such as summarizing papers, generating text, and programming. According to the European Code of Conduct for Research Integrity (ALLEA, 2023, p. 4), good research practice is grounded in the fundamental principles of research integrity:

  • Reliability in ensuring the quality of research is reflected in the design, the methodology, the analysis, and the use of resources.

  • Honesty in developing, undertaking, reviewing, reporting, and communicating research in a transparent, fair, full, and unbiased way.

  • Respect for colleagues, research participants, society, ecosystems, cultural heritage and the environment.

  • Accountability for research, from idea to publication, for its management and organization, for training, supervision and mentoring, and for its wider impacts.

Focusing on qualitative research and the type of data it uses, which is non-numerical and unstructured, ethical aspects such as data ownership and privacy have never been more important. Researchers must ensure that “data rights” are respected when using GenAI for analysis and that participants and organizations are informed about potential risks; compared to traditional methods, this increases complexity in data security and participant privacy (Davison et al., 2024). GenAI's ethical considerations include interpretive sufficiency, transparency, integrity, objectivity, and subjectivity. Researchers must ensure that AI interpretations are free from manipulation, maintain credibility, and recognize possible biases in order to sustain a neutral and impartial stance, upholding ethical standards in scientific research (Friese, 2025). AI also plays a transformative role in qualitative research, particularly through its ability to visualize complex datasets and foster deeper interpretative analysis; this Humanized AI Paradigm approach, when aligned with robust ethical guidelines, ensures that AI complements rather than replaces human expertise (Bryda & Costa, 2024). The Human-Centered AI (HCAI) framework addresses ethical issues such as misinformation and abuse of AI systems while presenting technology as a tool to enhance human agency (Sison et al., 2024).
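The data-protection duties described above can be made concrete at the tooling level. As a minimal sketch (the regular-expression patterns, the `redact` function, and the sample names are illustrative assumptions, not part of any cited framework), a researcher might strip direct identifiers from transcript excerpts before passing them to any external GenAI service:

```python
import re

# Illustrative redaction pass: masks e-mail addresses, phone-like numbers,
# and a researcher-supplied list of participant names. A real project would
# need a far more thorough de-identification protocol.
def redact(text: str, names: list[str]) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)   # e-mail addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)     # phone-like numbers
    for name in names:                                           # known participants
        text = re.sub(re.escape(name), "[PARTICIPANT]", text, flags=re.IGNORECASE)
    return text

excerpt = "Maria said she can be reached at maria.s@example.org or +351 912 345 678."
print(redact(excerpt, names=["Maria"]))
# → "[PARTICIPANT] said she can be reached at [EMAIL] or [PHONE]."
```

Running the redaction locally, before any data leaves the researcher's machine, is one way to keep the “data rights” obligation under the researcher's own control rather than delegating it to the GenAI provider.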

AI's advantages and societal risks are balanced by frameworks such as Dual Use Research of Concern (DURC) (Grinbaum & Adomaitis, 2024). AI-driven visual representations can also perpetuate gender biases, highlighting the need for non-stereotypical and equitable algorithmic designs (Sandoval-Martin & Martínez-Sanzo, 2024).

In addition, the use of GenAI in qualitative research presents ethical problems such as data protection, privacy, copyright violations, prejudice, misinformation, and social injustice. Authorship and academic integrity are crucial issues, as GenAI can create content that is difficult to attribute correctly, raising questions about authenticity and originality (Davison et al., 2024). GenAI also raises intellectual property rights and copyright issues. Misinformation and bias are further critical ethical concerns, as the technology can propagate existing biases in training data, leading to skewed analysis results (Lucchi, 2024).

Researchers must critically evaluate the results generated by AI to mitigate these biases. To meet these challenges, the use of GenAI must be guided by ethical principles such as human rights, justice, and transparency. Integrating GenAI into qualitative research requires a combination of skills to ensure that it is applied effectively and ethically: both the development of GenAI applications and their use in qualitative research demand a dual skill set. Solid foundations in qualitative research methodologies are therefore essential, and academics must be qualified to design research, collect data, and analyse findings (Mazeikiene & Kasperiuniene, 2024). Using GenAI in qualitative research requires an understanding of how the tools can support processes such as situational analysis, thematic analysis, grounded theory, data coding, and theme building, making these routine tasks, traditionally done manually by academics, more efficient and insightful (Christou, 2024; Perkins et al., 2024).
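The coding and theme-building step mentioned above has a simple computational core: low-level codes are collated under broader themes, much as Nguyen et al. (2022) collated 39 codes into seven principles. The sketch below illustrates only that aggregation; the codebook entries, theme names, and segments are invented for illustration and come from no cited study:

```python
from collections import defaultdict

# Hypothetical codebook: each low-level code maps to a broader theme.
codebook = {
    "data consent": "privacy",
    "data storage": "privacy",
    "model opacity": "transparency and accountability",
}

# Coded segments as produced by a (manual or GenAI-assisted) coding pass.
coded_segments = [
    ("Participants asked who keeps the transcripts.", "data storage"),
    ("The tool gives no rationale for its output.", "model opacity"),
    ("Consent forms did not mention AI analysis.", "data consent"),
]

# Collate segments under their themes.
themes = defaultdict(list)
for segment, code in coded_segments:
    themes[codebook[code]].append(segment)

for theme, segments in themes.items():
    print(f"{theme}: {len(segments)} segment(s)")
```

Whether the coding pass itself is done by hand or delegated to a GenAI tool, keeping the codebook and the collation step explicit and inspectable is what lets the researcher audit, and if necessary correct, the resulting themes.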

Using GenAI in research efficiently and effectively also implies that researchers understand its main concepts, capabilities, and limitations. This may involve knowledge of machine learning, natural language processing, and their ethical implications. The development of AI systems requires technical skills, such as the design and fine-tuning of algorithms, while researchers using these systems benefit from knowledge of thematic coding (Paulus & Marone, 2024).

While researchers do not need to be AI programmers, they should have a basic understanding of how AI tools work, including their underlying algorithms and data requirements. This knowledge helps them select the right tools and solve technical challenges during research. Developing AI literacy allows researchers to critically evaluate AI tools and decide how well they meet the needs of qualitative research (Ng et al., 2021). In this way, the researcher can turn what is known as a black box into a grey box by understanding the possible outcomes of these tools.

AI integration usually involves working in interdisciplinary teams with AI experts, ethicists, and experts in the topic to be investigated (Mazeikiene & Kasperiuniene, 2024). AI literacy is best taught through collaboration across disciplines. Bringing together AI specialists, educators, and ethicists ensures a well-rounded learning experience. By combining insights from fields like computer science, social sciences, and ethics, learners can gain a balanced perspective on AI’s potential and its broader implications (Allen & Kendeou, 2024).

Effective collaboration ensures a holistic approach to research, drawing on diverse expertise to address complex issues. Again, reinforcing what was written above, ethical skills are equally critical. Researchers must consider issues such as data privacy, informed consent, and the biases that AI systems can introduce. By developing ethical competence, they can ensure the responsible use of AI and maintain the integrity of their research. For example, Malakar and Leeladharan (2024) draw attention to ethical issues in collaborative research environments, while Pham and colleagues (2024) emphasize the importance of GDPR compliance in AI-based applications.

We can see GenAI as a research partner in qualitative studies. Its potential in qualitative research derives from its capacity to support data analysis (Dahal, 2024) or to investigate grounded theories and improve thematic coding (Christou, 2023; Sinha et al., 2024).

Researchers must maintain a critical perspective when interpreting GenAI results. This means being able to assess the validity and reliability of the insights generated by GenAI, ensuring that they are aligned with the objectives and standards of the research, in this context, qualitative. Similarly, creative capabilities allow developers to innovate tools for generating analogies (Chen & Chan, 2024), which researchers can adapt to improve narrative clarity and engagement. Lastly, the iterative evaluation of AI-generated insights (Nguyen & Nguyen, 2024) and adaptive applications in qualitative data interpretation (Gozali et al., 2024) are made possible by reflective and analytical competencies, which bridge the gap between AI capabilities and human judgment. Roberts et al. (2024) caution against over-reliance, highlighting risks such as the de-skilling of researchers and ethical dilemmas stemming from AI's "human-like" but inherently mechanical responses.

GenAI Governance

If we explore ethics by taking the socio-technical innovation ecosystem in which AI is realized as the unit of analysis, the focus shifts from individuals (e.g., developers or users), organizations, or institutions to the broader ecosystem (Stahl, 2023). The question of responsibility for ethical consequences then becomes how the ecosystem should be structured to promote the positive effects and prevent the negative impacts of technology (Stahl & Eke, 2024). Given the importance attributed to artificial intelligence (AI) and its transversal and pervasive nature, many governments worldwide have developed national AI strategies (Tulio & Silveira, 2022).

Governance is essential for minimizing negative incidents, fostering trust, and establishing long-term societal stability through the application of well-established tools and design practices (Theodorou & Dignum, 2020).

The emergence and exponential dissemination of GenAI raises issues that range from the individual user level to the larger ecosystem and society at large. We argue that a governance approach is crucial for understanding and developing a comprehensive GenAI governance model (Pinho et al., 2025). This should involve frameworks and components that function at the macro, meso, and micro levels, ensuring the critical, responsible, and ethical use of GenAI (Figure 5).

Source: Pinho, Costa and Pinho (2025).

Figure 5 Living GenAI Governance Model 

This Living GenAI Governance Model is a structured global view that makes it easier to locate each topic. For example, the topic of this article, the ethical and responsible use of AI in a research context, is a complex one that needs a multidimensional approach. Some questions derived from this model guide the scope of the study to be carried out:

How are higher education institutions learning to tailor the responsible use of GenAI tools to different purposes and policies?

How can ongoing GenAI literacy training be provided for the whole higher education community - researchers, students, teachers, and staff?

Does the new technology help researchers carry out specific research activities better?

Conclusion

The use of GenAI in qualitative research can introduce biases that affect the integrity and impartiality of research results. These biases are often inherent in the training data, reflecting social prejudices and stereotypes; they can lead to interpretive insufficiency and compromise the quality and reliability of research results. Addressing them is crucial to ensuring the validity and impartiality of such research.

A critical challenge is enhancing GenAI literacy and skills among researchers by implementing training programs and updating ethical and integrity guidelines. There is a need to establish a solid governance structure that includes clear ethical guidelines, risk assessments, and mechanisms to ensure responsible and secure GenAI use.

This article describes the application of the Living GenAI Governance Model in the research context. As diverse lines of research develop in response to the use of GenAI in higher education, the model can be used to clarify and structure their workflows.

At the level of implementation, this model can guide the integration of its various structural dimensions.

Utilizing the Living GenAI Governance Model in diverse contexts, including educational environments, facilitates its ongoing development and strengthens its status as a dynamic, evolving construct.

References

Christou, P. A. (2024, February 9). Thematic Analysis through Artificial Intelligence (AI). The Qualitative Report, 29, 560-576. https://doi.org/10.46743/2160-3715/2024.7046Links ]

ALLEA. (2023). The European Code of Conduct for Research Integrity - Revised Edition 2023. https://doi.org/10.26356/ECOCLinks ]

Allen, L. K., & Kendeou, P. (2024). ED-AI Lit: An Interdisciplinary framework for AI literacy in education. Policy Insights from the Behavioral and Brain Sciences, 11(1), 3-10. [ Links ]

Bai, J. Y. H., Zawacki-Richter, O., & Muskens, W. (2024). Re-Examining the Future Prospects of Artificial Intelligence in Education in Light of the GDPR and ChatGPT. Turkish Online Journal of Distance Education, 25(1), 20-32. <Go to ISI>://WOS:001166125900003. [ Links ]

Banh, L., & Strobel, G. (2023, December 6). Generative artificial intelligence. Electronic Markets, 33(1), 63. https://doi.org/10.1007/s12525-023-00680-1Links ]

Bryda, G., & Costa, A. P. (2024). Transformative Technologies: Artificial Intelligence and Large Language Models in Qualitative Research. Revista Baiana de Enfermagem, 38. https://doi.org/10.18471/rbe.v38.61024Links ]

Burneo, P., Costa, A. P., Pinho, I., Muniz, A. B., & Moresi, E. A. (2025). Competence Frameworks for Exploring Generative AI in Education. In Artificial Intelligence (AI) in Social Research (pp. 160). https://doi.org/10.1079/9781800626607.0015Links ]

Capel, T., & Brereton, M. (2023). What is human-centered about human-centered AI? A map of the research landscape. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. [ Links ]

Chen, Z., & Chan, J. (2024). Large language model in creative work: The role of collaboration modality and user expertise. Management Science, 70(12), 9101-9117. [ Links ]

Christou, P. A. (2023). Ηow to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research? The Qualitative Report, 28(7), 1968-1980. [ Links ]

Dahal, N. (2024). How can generative AI (GenAI) enhance or hinder qualitative studies? A critical appraisal from South Asia, Nepal. The Qualitative Report, 29(3), 722-733. https://www.proquest.com/docview/2955806948/fulltextPDF/EA71AC4E77EC4BD2PQ/1?accountid=26357&sourcetype=Scholarly%20Journals.Links ]

Davison, R. M., Chughtai, H., Nielsen, P., Marabelli, M., Iannacci, F., van Offenbeek, M., Tarafdar, M., Trenz, M., Techatassanasoontorn, A. A., Andrade, A. D., & Panteli, N. (2024, September). The ethics of using generative AI for qualitative data analysis. Information Systems Journal, 34(5), 1433-1439. https://doi.org/10.1111/isj.12504Links ]

Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060. [ Links ]

European, U. (2024). Regulation (EU) 2024/1689 of the European parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) 300/2008, (EU) No 167/2013,(EU) 168/2013,(EU) 2018/858,(EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, shorttitle (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act. URL https://eur-lex. europa. eu/eli/reg/2024/1689/oj.Links ]

European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) http://data.europa.eu/eli/reg/2024/1689/oj. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence https://www.europarl.europa.eu/pdfs/news/expert/2023/6/story/20230601STO93804/20230601STO93804_en.pdfhttp://data.europa.eu/eli/reg/2024/1689/oj.Links ]

Ferrell, O., Harrison, D. E., Ferrell, L. K., Ajjan, H., & Hochstein, B. W. (2024). A theoretical framework to guide AI ethical decision making. AMS Review, 14(1), 53-67 https://www.nature.com/articles/s41746-024-01221-6#citeas.Links ]

Friese, S. (2025). Generative AI: A catalyst for paradigmatic change in qualitative data analysis. In Artificial Intelligence (AI) in Social Research (pp. 71-83). CABI. [ Links ]

Gozali, I., Wijaya, A. R. T., Lie, A., Cahyono, B. Y., & Suryati, N. (2024). Leveraging the potential of ChatGPT as an automated writing evaluation (AWE) tool: Students' feedback literacy development and AWE tools integration framework. The JALT CALL Journal, 20(1), 1-22. [ Links ]

Grinbaum, A., & Adomaitis, L. (2024). Dual use concerns of generative AI and large language models. Journal of Responsible Innovation, 11(1), 2304381. https://www.tandfonline.com/doi/epdf/10.1080/23299460.2024.2304381?needAccess=truehttps://www.tandfonline.com/doi/full/10.1080/23299460.2024.2304381.Links ]

ICAI. (2021). The Fundamental Values of Academic Integrity (Vol. 16). International Center for Academic Integrity [ICAI]. https://academicintegrity.org/aws/ICAI/asset_manager/get_file/911282?ver=1.Links ]

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://arxiv.org/pdf/1906.11668 https://www.nature.com/articles/s42256-019-0088-2.Links ]

Kadaruddin, K. (2023). Empowering Education through Generative AI: Innovative Instructional Strategies for Tomorrow's Learners. International Journal of Business, Law, and Education, 4(2), 618 - 625. https://doi.org/10.56442/ijble.v4i2.215Links ]

Liu, J. Q. J., Hui, K. T. K., Al Zoubi, F., Zhou, Z. Z. X., Samartzis, D., Yu, C. C. H., Chang, J. R., & Wong, A. Y. L. (2024, May). The great detectives: humans versus AI detectors in catching large language model-generated medical writing. International Journal for Educational Integrity, 20(1), Article 8. https://doi.org/10.1007/s40979-024-00155-6Links ]

Liu, X. H. (2024). Navigating Uncharted Waters: Teachers' Perceptions of and Reactions to AI-Induced Challenges to Assessment. Asia-Pacific Education Researcher. https://doi.org/10.1007/s40299-024-00890-xLinks ]

Lucchi, N. (2024). ChatGPT: a case study on copyright challenges for generative artificial intelligence systems. European Journal of Risk Regulation, 15(3), 602-624. [ Links ]

Mainardes, J., & Comas Forgas, R. (2025). Apresentação da Seção Temática: Ética e integridade acadêmica e científica. Práxis Educativa, 19, 1-5. https://doi.org/10.5212/PraxEduc.v.19.24470.118

Malakar, P., & Leeladharan, M. (2024). Generative AI tools for collaborative content creation: A comparative analysis. DESIDOC Journal of Library & Information Technology, 44(3), 151-157.

Mazeikiene, N., & Kasperiuniene, J. (2024). AI-enhanced qualitative research: Insights from Adele Clarke's situational analysis of TED Talks. The Qualitative Report, 29(9), 2502-2526.

Mills, S., Baltassis, E., Santinelli, M., Carlisi, C., Duranton, S., & Gallego, A. (2020). Six steps to bridge the responsible AI gap. Boston Consulting Group. https://web-assets-pdf.bcg.com/prod/six-steps-for-socially-responsible-artificial-intelligence.pdf

Moresi, E., Pinho, I., Costa, A. P., Arteaga, P., Machado, L., & Freitas, F. (2024). Bibliometric and comparative analysis of generative artificial intelligence in education research. CISTI 2024, Salamanca, Spain. https://www.cisti.eu/2024/index.php/en/proceedings https://itmasoc.org/cisti2024/modules/request.php?module=oc_program&action=summary.php&id=300

Muthanna, A., Chaaban, Y., & Qadhi, S. (2024). Um modelo da inter-relação entre ética em pesquisa e integridade em pesquisa. Práxis Educativa, 19, 1-16. https://doi.org/10.5212/PraxEduc.v.19.23727.079

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

Nguyen, A., Ngo, H., Hong, Y., Dang, B., & Nguyen, B.-P. (2022). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28, 18827-18846. https://doi.org/10.1007/s10639-022-11316-w

Nguyen, H., & Nguyen, A. (2024). Reflective practices and self-regulated learning in designing with generative artificial intelligence: An ordered network analysis. Journal of Science Education and Technology. https://doi.org/10.1007/s10956-024-10175-z

Nunes, L. (2023). Uma cartografia das Comissões de Ética do Ensino Superior Politécnico em Portugal. Práxis Educativa, 18, 1-23. https://doi.org/10.5212/PraxEduc.v.18.22135.075

Nunes, L. (2024). Foco ético num bem comum - sobre as estratégias em uso para promover a integridade académica e científica. I Colóquio Internacional Ética e Integridade na Investigação em Ciências Humanas e Sociais, Universidade de Aveiro, Portugal. https://eticaeintegridade.web.ua.pt/

Nunes, L., Carmezim, M., & Fernandes, R. (2024). Código de Ética e Conduta do Instituto Politécnico de Setúbal: anotado e comentado. http://hdl.handle.net/10400.26/53637

OECD. (2025). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). https://doi.org/10.1787/a8d820bd-en

Paulus, T. M., & Marone, V. (2024). “In minutes instead of weeks”: Discursive constructions of generative AI and qualitative data analysis. Qualitative Inquiry, 31(1), 395-402. https://doi.org/10.1177/10778004241250065

Pedro, A. (2024). A ética nos percursos de formação e de investigação em educação. In Aprendizagem, diversidade e equidade: A investigação em educação (p. 92).

Pedro, A. P. d. S. S. (2023). Ética e integridade na investigação e na formação: percursos de um caminho (ainda) por fazer - o caso português. Horizontes, 41(1), 1-20, e023049. https://doi.org/10.24933/horizontes.v41i1.1684

Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). Simple techniques to bypass GenAI text detectors: Implications for inclusive education. International Journal of Educational Technology in Higher Education, 21(1), Article 53. https://doi.org/10.1186/s41239-024-00487-w

Pham, N. T., Phan, T. H., Bang, N., Hung, N., Trinh, P., Le, N. T., Tran, K. D., & Le, B. K. (2024). GenAI-powered analysis of GIS app privacy policies for GDPR compliance. International Conference on Hybrid Artificial Intelligence Systems.

Pinho, I., Costa, A. P., & Pinho, C. (2025). Generative AI Governance Model in educational research. Frontiers in Education, 10, 1594343. https://doi.org/10.3389/feduc.2025.1594343

Pretorius, L. (2023). Fostering AI literacy: A teaching practice reflection. Journal of Academic Language and Learning, 17(1), T1-T8.

Sanderson, C., Lu, Q., Douglas, D., Xu, X., Zhu, L., & Whittle, J. (2022). Towards implementing responsible AI. 2022 IEEE International Conference on Big Data (Big Data) (pp. 5217-5226). IEEE.

Sandoval-Martin, T., & Martínez-Sanzo, E. (2024). Perpetuation of gender bias in visual representation of professions in the generative AI tools DALL·E and Bing Image Creator. Social Sciences, 13(5), 250. https://www.mdpi.com/2076-0760/13/5/250

Siau, K., & Wang, W. Y. (2020). Artificial Intelligence (AI) ethics: Ethics of AI and ethical AI. Journal of Database Management, 31(2), 74-87. https://doi.org/10.4018/jdm.2020040105

Sinha, R., Solola, I., Nguyen, H., Swanson, H., & Lawrence, L. (2024). The role of generative AI in qualitative research: GPT-4's contributions to a grounded theory analysis. Proceedings of the 2024 Symposium on Learning, Design and Technology.

Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2024). ChatGPT: More than a “weapon of mass deception” ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective. International Journal of Human-Computer Interaction, 40(17), 4853-4872.

Stahl, B. C. (2023). Embedding responsibility in intelligent systems: From AI ethics to responsible AI ecosystems. Scientific Reports, 13(1), 7586. https://www.nature.com/articles/s41598-023-34622-w.pdf

Stahl, B. C., & Eke, D. (2024). The ethics of ChatGPT - Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 102700. https://doi.org/10.1016/j.ijinfomgt.2023.102700

Theodorou, A., & Dignum, V. (2020). Towards ethical and socio-legal governance in AI. Nature Machine Intelligence, 2(1), 10-12. https://doi.org/10.1038/s42256-019-0136-y

Tulio, C., & Silveira, S. A. d. (2022). Exame comparativo das estratégias nacionais de inteligência artificial de Argentina, Brasil, Chile, Colômbia e Coreia do Sul: Consistência do diagnóstico dos problemas-chave identificados. https://hdl.handle.net/10419/284861

UN. (1948). Universal declaration of human rights. UN General Assembly, 302(2), 14-25.

UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000386693

Xia, B., Lu, Q., Zhu, L., & Xing, Z. (2024). An AI system evaluation framework for advancing AI safety: Terminology, taxonomy, lifecycle mapping. Proceedings of the 1st ACM International Conference on AI-Powered Software.

Received: May 01, 2025; Accepted: August 08, 2025; Published: August 16, 2025

This is an open-access article distributed under the terms of the Creative Commons Attribution License.