Ethics and AI
Ethics is the branch of philosophy concerned with what is morally good and bad, right and wrong. It can be divided into two main dimensions: normative ethics, which examines the norms or rules that should govern conduct, and applied ethics, which studies ethical issues in specific contexts. In the context of AI, a parallel division can be drawn between the ethics of AI and ethical AI (Siau & Wang, 2020).
The ethics of AI is part of the ethics of advanced technology and focuses on robots and other artificially intelligent agents. It can be divided into robot ethics (roboethics) and machine ethics. Roboethics is concerned with the moral behavior of humans as they design, construct, use, and interact with AI agents, and with the impact of robots on humanity and society. Machine ethics deals with the moral behavior of Artificial Moral Agents (AMAs) and with the research field that addresses their design. As technology advances and robots become more intelligent, artificially intelligent agents themselves should behave morally and exhibit moral values.
From Values to Practice
The normative content of the documents concerning ethical AI can span diverse concepts: values, principles, policies, standards, and guidelines. The relationship between these concepts can be summarized as follows: Values provide the underlying beliefs that inform the development of principles. Principles offer a framework for interpreting and applying values consistently. Policies operationalize values and principles into specific actions and rules to guide behavior and decision-making within organizations or societies. These concepts can be organized hierarchically, from an abstract level to an applied level (see Figure 1).
Values are fundamental beliefs that guide behavior and decision-making. Principles are permanent, universal, non-negotiable standards based on ethical and legal foundations (Ferrell et al., 2024): fundamental truths or propositions that serve as the foundation for a system of belief or behavior. They are often derived from values and provide a framework for making decisions. The guiding principles of the Universal Declaration of Human Rights, for example, have been adopted into many national constitutions and legal frameworks worldwide (UN, 1948).
The OECD AI Principles are the first intergovernmental standard on AI (OECD, 2025). They promote innovative, trustworthy AI that respects human rights and democratic values. Adopted in 2019 and updated in 2024, they are composed of five values-based principles and five recommendations that provide practical and flexible guidance for policymakers and AI actors (see Figure 2).
Everyone is entitled to personal data protection. This value is the starting point of the General Data Protection Regulation (GDPR), effective from 25 May 2018, which provides a legal framework for personal data protection in the EU, including data processed by AI systems. Access to data is crucial for the development of Artificial Intelligence in Education (AIEd), and jurisdictions take contrasting regulatory approaches to it, from laissez-faire to heavily regulated options. Laissez-faire environments may facilitate the collection of more learner data for AIEd research but increase the risks of privacy breaches and data misuse. Conversely, regulatory environments prioritizing privacy and data protection may enforce restrictions that limit some AIEd applications (Bai et al., 2024).
One of the first meta-analyses of published AI ethics principles was conducted by Anna Jobin and colleagues (2019). They identified 84 documents and distilled 11 ethical principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity. These are core principles because they invoke values that theories in moral and political philosophy argue to be intrinsically valuable, meaning their value is not derived from something else.
In the field of Artificial Intelligence in Education (AIED), a study explored ethical concerns by mapping and analyzing international organizations' current policies and guidelines (Nguyen et al., 2022). This resulted in a total of 39 codes, which were then examined and collated into patterns of broader meaning, yielding 7 themes (i.e., principles): 1) governance and stewardship; 2) transparency and accountability; 3) sustainability and proportionality; 4) privacy; 5) security and safety; 6) inclusiveness; 7) human-centered AIED.
An example of policies at the national level comes from Portugal, which in 2021 enacted the “Portuguese Charter of Human Rights in the Digital Age”, mandating that artificial intelligence respect fundamental rights by balancing explainability, security, transparency, and responsibility to prevent prejudice and discrimination (Ferrell et al., 2024).
Guidelines and procedures are based on the associated standards and explain how to implement a given standard. A procedure provides the detailed mandatory steps (sometimes in the form of a checklist) someone needs to follow to accomplish a recurring task or comply with a policy; it can include step-by-step instructions or statements indicating where something needs to go. In short, a procedure informs you how to carry out or implement a policy. Current AI ethics principles, however, are often broad and lack specific guidance on designing and developing AI systems (Sanderson et al., 2022).
Roles and responsibilities in AI safety
Continuous monitoring of AI safety means not only adhering to regulations, such as the European Union Artificial Intelligence Act (European Union, 2024), but also building trust and operational integrity for all stakeholders.
Ensuring AI safety requires a systematic view that considers stakeholders' various roles and responsibilities across the AI supply chain (Xia et al., 2024). Figure 3 shows the need for evaluations that span the entire AI development lifecycle and engage all relevant stakeholders. These stakeholders include:
AI Producer: An entity engaged in the design, development, testing, and supply of AI technologies, including models and components.
AI Provider: An entity that offers AI-driven products or services, including both platform providers and those offering specific AI-based products or services.
AI Partner: An entity offering AI-related services, such as system integration, data provisioning, evaluation, and auditing.
AI Deployer: An organisation that utilises an AI system by making the system or its outputs (e.g., decisions, predictions, recommendations) available to internal or external users (e.g., customers).
AI User: An entity utilizing or relying on an AI system, ranging from organizations (e.g., businesses, governments, non-profits) to individuals or other systems.
In some contexts, an organisational AI user is equivalent to an AI deployer.
Affected Entity: An entity impacted by the decisions or behaviors of an AI system, including organizations, individuals, communities, and other systems.
Implementing responsible AI necessitates comprehending the practices of designers and developers, aligning them with ethical principles, and monitoring user interactions with AI from an ethical AI and human-centered AI perspective (Capel & Brereton, 2023).
UNESCO suggests that government agencies regulate GenAI tools, while educational institutions validate the ethical and pedagogical aspects of these tools (UNESCO, 2023).
Generative AI and Academic Integrity
Generative AI (GenAI) is a type of artificial intelligence (AI) technology that can generate new and unique outputs. GenAI falls under the umbrella of artificial intelligence (Figure 4), which spans different computational algorithms capable of performing tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, making decisions, and learning from experience (Banh & Strobel, 2023).
GenAI in educational settings is a field that includes aspects such as social interaction, personalized learning, and ethical considerations (Burneo et al., 2025; Kadaruddin, 2023; Moresi et al., 2024).
The advent of AI, particularly GenAI, has profoundly impacted higher education (teaching, learning, assessment, and research). Academic integrity, as defined by the International Centre for Academic Integrity, is the commitment to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage (ICAI, 2021). These values form the foundation of academic practices. With the development of GenAI, there is increasing discussion about its potential impact on academic integrity (Eke, 2023).
It is necessary to consider ethics and integrity in research, training, and learning (Pedro, 2024; Pedro, 2023; Pretorius, 2023; Nunes, 2023). Some challenges in academic integrity and trust concern mitigating plagiarism risks while designing assessments (Liu et al., 2024; Liu, 2024; Nunes, 2024; Muthanna et al., 2024; Nunes et al., 2024). Particularly in the research context, ethics is not limited to consent requests approved by ethics committees; it must be present in all research activities, and three approaches need to be considered:
a) consider ethics as two structural elements of research that, therefore, need to be in the “foreground”; b) articulate ethics in research with integrity in research; and c) promote ethics and integrity in various instances, to configure an “ecosystem of ethics and integrity” (Mainardes & Comas-Forgas, 2025, p. 4).
Qualitative data analysis with Generative AI
GenAI offers both opportunities and challenges. In research, balancing innovation with responsibility and ethics is crucial. Researchers acknowledge its influence on activities such as summarizing papers, generating text, and programming. According to the European Code of Conduct for Research Integrity (ALLEA, 2023, p. 4), good research practice is grounded in the fundamental principles of research integrity:
Reliability in ensuring the quality of research is reflected in the design, the methodology, the analysis, and the use of resources.
Honesty in developing, undertaking, reviewing, reporting, and communicating research in a transparent, fair, full, and unbiased way.
Respect for colleagues, research participants, society, ecosystems, cultural heritage and the environment.
Accountability for research, from idea to publication, for its management and organization, for training, supervision and mentoring, and for its wider impacts.
In qualitative research, where the data are non-numerical and unstructured, considering ethical aspects such as data ownership and privacy has never been more important. Researchers must ensure that “data rights” are respected when using GenAI for analysis and that participants and organizations are informed about potential risks; compared to traditional methods, this increases the complexity of data security and participant privacy (Davison et al., 2024). GenAI's ethical considerations include interpretive sufficiency, transparency, integrity, objectivity, and subjectivity. Researchers must ensure that AI interpretations are free from manipulation, maintain credibility, and recognize possible biases in order to maintain a neutral and impartial stance, upholding ethical standards in scientific research (Friese, 2025). Bryda and Costa (2024) highlight the transformative role of AI in qualitative research, particularly through its ability to visualize complex datasets and foster deeper interpretative analysis; their Humanized AI Paradigm approach, when aligned with robust ethical guidelines, ensures that AI complements rather than replaces human expertise. The Human-Centered AI (HCAI) framework addresses ethical issues like misinformation and abuse of AI systems while presenting technology as a tool to enhance human agency (Sison et al., 2024).
Frameworks such as Dual Use Research of Concern (DURC) help balance AI's advantages against its societal risks (Grinbaum & Adomaitis, 2024). Sandoval-Martin and Martínez-Sanzo (2024) examine AI-driven visual representations that perpetuate gender biases, highlighting the need for non-stereotypical and equitable algorithmic designs.
In addition, the use of GenAI in qualitative research presents ethical problems such as data protection, privacy, copyright violations, prejudice, misinformation, and social injustice. Authorship and academic integrity are crucial issues, as GenAI can create content that is difficult to attribute correctly, raising questions about authenticity and originality (Davison et al., 2024). GenAI also raises intellectual property and copyright issues. Misinformation and bias are further critical ethical concerns, as the technology can propagate existing biases in training data, leading to skewed analysis results (Lucchi, 2024).
Researchers must critically evaluate the results generated by AI to mitigate these biases. To meet these challenges, GenAI must be guided by ethical principles such as human rights, justice, and transparency. Integrating GenAI into qualitative research requires a dual set of skills to ensure that it is applied effectively and ethically. Solid foundations in qualitative research methodologies are essential: academics must be qualified to design research, collect data, and analyse findings (Mazeikiene & Kasperiuniene, 2024). Using GenAI in qualitative research also requires an understanding of how the tools can support processes such as situational analysis, thematic analysis, grounded theory, data coding, and theme building, making routine tasks that academics once did manually more efficient and insightful (Christou, 2024; Perkins et al., 2024).
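None of the works cited above prescribe a specific workflow, so the sketch below is offered as an illustration only of what an AI-assisted coding step might look like. The `suggest_codes` function and `CODEBOOK` are hypothetical names; in practice the function body would send the excerpt and codebook to a GenAI model and parse its answer, whereas here it is a simple keyword matcher so the example runs offline.

```python
# Minimal sketch of AI-assisted thematic coding (hypothetical names).
# `suggest_codes` stands in for a GenAI call: in real use it would
# prompt a model with the excerpt and the codebook; here it is a
# keyword matcher so the sketch is self-contained and deterministic.

CODEBOOK = {
    "privacy": ["consent", "data", "personal"],
    "trust": ["trust", "reliable", "credib"],
    "workload": ["time", "effort", "manual"],
}

def suggest_codes(excerpt: str, codebook: dict[str, list[str]]) -> list[str]:
    """Return candidate codes for an interview excerpt."""
    text = excerpt.lower()
    return [code for code, cues in codebook.items()
            if any(cue in text for cue in cues)]

excerpts = [
    "I never gave consent for my data to be used this way.",
    "Coding transcripts by hand takes enormous time and effort.",
]

for e in excerpts:
    print(e, "->", suggest_codes(e, CODEBOOK))
```

However the model is invoked, the researcher remains the final arbiter: AI-suggested codes are candidates to accept, revise, or reject, not finished analysis.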
Using GenAI in research efficiently and effectively also implies that researchers need to understand its main concepts, capabilities and limitations. This may involve having knowledge of machine learning, natural language processing and their ethical implications. The development of AI systems requires technical skills, such as the design and fine-tuning of algorithms, and researchers using these systems gain from knowledge of thematic coding (Paulus & Marone, 2024).
While researchers don't need to be AI programmers, they should have a basic understanding of how AI tools work, including their underlying algorithms and data requirements. This knowledge helps to select the right tools and solve technical challenges during research. Developing AI literacy allows researchers to critically evaluate AI tools and decide how well they meet the needs of qualitative research (Ng et al., 2021). In this way, the researcher can turn something known as a black box into a grey box by understanding the possible outcomes of these tools.
AI integration usually involves working in interdisciplinary teams with AI experts, ethicists, and experts in the topic to be investigated (Mazeikiene & Kasperiuniene, 2024). AI literacy is best taught through collaboration across disciplines. Bringing together AI specialists, educators, and ethicists ensures a well-rounded learning experience. By combining insights from fields like computer science, social sciences, and ethics, learners can gain a balanced perspective on AI’s potential and its broader implications (Allen & Kendeou, 2024).
Effective collaboration ensures a holistic approach to research, drawing on diverse expertise to address complex issues. Again, reinforcing what was written above, ethical skills are equally critical. Researchers must consider issues such as data privacy, informed consent, and the biases that AI systems can introduce. By developing ethical competence, they can ensure the responsible use of AI and maintain the integrity of their research. For example, Malakar and Leeladharan (2024) draw attention to ethical issues in collaborative research environments, while Pham and colleagues (2024) emphasize the importance of GDPR compliance in AI-based applications.
We can see GenAI as a research partner in qualitative studies. Its potential stems from its capacity to support data analysis (Dahal, 2024), investigate grounded theories, and improve thematic coding (Christou, 2023; Sinha et al., 2024).
Researchers must maintain a critical perspective when interpreting GenAI results. This means being able to assess the validity and reliability of the insights generated by GenAI, ensuring that they are aligned with the objectives and standards of the research, in this case qualitative. Similarly, creative capabilities allow developers to innovate tools for generating analogies (Chen & Chan, 2024), which researchers can adapt to improve narrative clarity and engagement. Lastly, the iterative evaluation of AI-generated insights (Nguyen & Nguyen, 2024) and adaptive applications in qualitative data interpretation (Gozali et al., 2024) are made possible by reflective and analytical competencies, which bridge the gap between AI capabilities and human judgment. Roberts and colleagues (2024) caution against over-reliance, highlighting risks such as the de-skilling of researchers and ethical dilemmas stemming from AI's "human-like" but inherently mechanical responses.
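One concrete way to exercise that critical perspective, offered here as a suggestion rather than a method taken from the works cited, is to measure inter-coder agreement between a human coder and the model before trusting AI-assigned codes. A minimal sketch using Cohen's kappa:

```python
from collections import Counter

def cohens_kappa(human: list[str], ai: list[str]) -> float:
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(human) == len(ai) and human, "codings must align"
    n = len(human)
    # Observed agreement: fraction of excerpts coded identically.
    observed = sum(h == a for h, a in zip(human, ai)) / n
    # Expected agreement if the two coders assigned codes independently.
    h_counts, a_counts = Counter(human), Counter(ai)
    expected = sum(h_counts[c] * a_counts.get(c, 0) for c in h_counts) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to six excerpts by a human and a model.
human = ["privacy", "trust", "privacy", "workload", "trust", "privacy"]
ai    = ["privacy", "trust", "privacy", "trust",    "trust", "privacy"]
print(round(cohens_kappa(human, ai), 2))  # prints 0.71
```

Low agreement would signal that the AI's coding diverges from the researcher's interpretation and should be audited rather than accepted wholesale; the conventional reading treats values above roughly 0.6 as substantial agreement.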
GenAI Governance
If we explore ethics from the perspective of the socio-technical innovation ecosystem in which AI is realized, the focus shifts from individuals (e.g., developers or users) or organizations and institutions to the broader ecosystem (Stahl, 2023). The question of responsibility for ethical consequences then becomes how the ecosystem should be structured to promote positive effects and prevent negative impacts of technology (Stahl & Eke, 2024). Given the importance attributed to artificial intelligence and its transversal and pervasive nature, many governments worldwide have developed national AI strategies (Tulio & Silveira, 2022).
Governance is essential for minimizing negative incidents, fostering trust, and establishing long-term societal stability through the application of well-established tools and design practices (Theodorou & Dignum, 2020).
The emergence and exponential dissemination of GenAI raises issues that range from the individual user level to the larger ecosystem and society at large. We argue that a governance approach is crucial for understanding and developing a comprehensive GenAI governance model (Pinho et al., 2025). This should involve frameworks and components that function at macro, meso, and micro levels, ensuring the critical, responsible, and ethical use of GenAI (Figure 5).
This Living GenAI Governance Model offers a structured global view that makes it easier to locate each topic. For example, the topic of this article, the ethical and responsible use of AI in a research context, is a complex one that needs a multidimensional approach. Some questions derived from this model guide the scope of the study to be carried out:
How are institutions of higher education learning to tailor GenAI tools' responsible use to suit different purposes and policies?
How can ongoing GenAI literacy training be provided for the whole higher education community (researchers, students, teachers, and staff)?
Does the new technology help researchers carry out specific research activities better?
Conclusion
The use of GenAI in qualitative research can introduce biases that affect the integrity and impartiality of research results. Such biases are often inherent in the training data, reflecting social prejudices and stereotypes, and they can lead to interpretive insufficiency and compromise the quality and reliability of findings. Addressing them is crucial to ensuring the validity and impartiality of such research.
A critical challenge is enhancing GenAI literacy and skills among researchers by implementing training programs and updating ethical and integrity guidelines. There is a need to establish a solid governance structure that includes clear ethical guidelines, risk assessments, and mechanisms to ensure responsible and secure GenAI use.
This article describes the application of the Living GenAI Governance Model in the research context. As diverse lines of research develop in response to the use of GenAI in higher education, the model can be used to clarify and structure their workflows.
At the level of implementation, the model can guide the integration of its various structural dimensions.
Utilizing the Living GenAI Governance Model in diverse contexts, including educational environments, facilitates its ongoing development and strengthens its status as a dynamic, evolving construct.