Ensaio: Avaliação e Políticas Públicas em Educação

Print ISSN 0104-4036 | Online ISSN 1809-4465

Ensaio: aval. pol. públ. educ. vol.33 no.126 Rio de Janeiro jan./mar 2025  Epub 09-Jan-2025

https://doi.org/10.1590/s0104-40362025003304918 

ARTICLE

Cybernetics of self-regulation, homeostasis, and fuzzy logic: foundational triad for assessing learning using artificial intelligence

Cibernética da auto-regulação, homeostase e lógica difusa: tríade fundamental para avaliar a aprendizagem usando inteligência artificial

Cibernética de autorregulación, homeostasis y lógica difusa: tríada fundamental para evaluar el aprendizaje usando inteligencia artificial

Edinson Oswaldo Delgado Rivas a (http://orcid.org/0000-0003-4736-0436)

Edinson Oswaldo Delgado Rivas: MSc. in Interdisciplinary Complexity Studies, with experience in research on Pedagogical Innovation and Education 4.0.

Andrés Chiappe b, corresponding author (http://orcid.org/0000-0002-9664-4833)

Andrés Chiappe: PhD in Educational Sciences. Senior Professor and Researcher at the Universidad de La Sabana. Director of the Doctorate in Educational Innovation using ICT.

Angélica Vera Sagredo c (http://orcid.org/0000-0003-1657-2241)

Angélica Vera Sagredo: Professor, Bachelor's degree in Education, PhD in Education. Director of the Graduate School of the Faculty of Education at Universidad Católica de la Santísima Concepción (UCSC).

Author contributions: all authors participated in the literature review, data collection, data analysis, discussion of results, manuscript preparation, and text writing.

a Fundación Universitaria Navarra, Uninavarra, Neiva, Colombia.

b Universidad de La Sabana, Chía, Colombia.

c Universidad Católica de la Santísima Concepción, Concepción, Chile.


Abstract

Today’s Education is increasingly mediated by digital technologies, which pose new challenges that must be examined in detail to turn them into opportunities for advancement and evolution. Such is the case of the use of artificial intelligence in learning assessment processes, which is forcing us to rethink the traditional methods, mechanisms, and strategies used to assess student learning achievement, especially in distance and online Education. Given this complexity, this analytical essay examines artificial intelligence developments that support the so-called “Assessment 4.0”, based on the application of fuzzy logic, homeostasis, and the cybernetics of self-regulation. Such an application would provide technical support and a general framework of understanding for the evaluation processes of both teachers and students, promoting assessment more in line with the flexible, and often imprecise and ambiguous, nature of the learning and performance associated with skills assessment in the framework of the fourth industrial revolution.

Key words: Assessment 4.0; Distance Education; Online Education; Cybernetics of Self-Regulation; Fuzzy Logic; Metacognition; Feedback

Resumo

A Educação atual está cada vez mais mediada por tecnologias digitais que implicam novos desafios que precisam ser abordados detalhadamente para transformá-los em oportunidades de avanço e evolução. Tal é o caso da inteligência artificial nos processos de avaliação da aprendizagem, o que nos obriga a repensar métodos, mecanismos e estratégias tradicionais para avaliar o sucesso da aprendizagem dos alunos, especialmente na Educação a Distância e online. Dada a complexidade do exposto, este ensaio analítico propõe uma análise dos desenvolvimentos da inteligência artificial que apoiam a chamada “avaliação 4.0”, com base na aplicação da lógica difusa, homeostase e na cibernética da auto-regulação. Tal aplicação forneceria suporte técnico e um quadro de entendimento geral para os processos de Avaliação, tanto para os professores quanto para os alunos, objetivando promover processos de avaliação mais alinhados com a natureza flexível e muitas vezes imprecisa e ambígua da aprendizagem e desempenho, associados à avaliação de habilidades no âmbito da quarta revolução industrial.

Palavras-Chave: Avaliação 4.0; Educação a Distância; Educação Online; Cibernética da Auto-Regulação; Lógica Difusa; Metacognição; Feedback

Resumen

La educación actual está cada vez más mediada por tecnologías digitales que implican nuevos desafíos que necesitan ser abordados en detalle para convertirlos en oportunidades de avance y evolución. Tal es el caso del uso de inteligencia artificial en los procesos de evaluación del aprendizaje, lo que nos obliga a repensar métodos, mecanismos y estrategias tradicionales para evaluar el logro del aprendizaje de los estudiantes, especialmente en la educación a distancia y en línea. Dada la complejidad de lo anterior, este ensayo analítico ha propuesto un análisis de los desarrollos de inteligencia artificial que respaldan la llamada “evaluación 4.0”, basada en la aplicación de la lógica difusa, homeostasis y la cibernética de autorregulación. Tal aplicación proporcionaría soporte técnico y un marco de entendimiento general para los procesos de evaluación tanto para profesores como para estudiantes, para promover procesos de evaluación más acordes con la naturaleza flexible y a menudo imprecisa y ambigua del aprendizaje y el rendimiento asociados con la evaluación de habilidades en el marco de la cuarta revolución industrial.

Palabras-clave: Evaluación 4.0; Educación a Distancia; Educación en Línea; Cibernética de Autorregulación; Lógica Difusa; Metacognición; Retroalimentación

1 Introduction

The four documented industrial revolutions represent key milestones in human history and have profoundly transformed the way we produce, consume, interact, and educate ourselves. According to Lee et al. (2018), the first industrial revolution was characterized by the introduction of the steam engine and the mechanization of production; the second focused on mass production and electrification of industry; the third brought about digitization and automation of production, and the fourth industrial revolution, or Industry 4.0, centers on the integration of emerging technologies such as artificial intelligence, robotics, and the Internet of Things into production and business management processes (Patiño; Ramírez-Montoya; Buenestado-Fernández, 2023).

According to Chituc (2021), in this context, the term “Education 4.0” arises, which is used to describe the transformation of educational systems in response to the fourth industrial revolution. Said transformation is due, in part, to the need to prepare students for the challenges of the future, including the increase in automation and digitization of the economy, as well as the demands of an increasingly competitive and changing labor market (Akimov et al., 2023).

In this regard, it is important to understand the challenges of learning in digital distance environments (Zapata-Ros, 2018). Specifically, Jurado Valencia (2016) emphasizes the weakness of pedagogical training among university professors and the excessive standardization of assessment as two of the most relevant factors contributing to the phenomenon of dropout rates during the first two years of Higher Education, which can be extended to the context of interaction in digital environments.

Taking the above into consideration, it is worth highlighting the various issues associated with the assessment of learning that have drawn attention in educational research in recent decades regarding these learning environments. In this sense, complex evaluative phenomena such as fraud (Martinez; Ramírez, 2017), information plagiarism (Chaika et al., 2023), identity impersonation (Pfeiffer et al., 2020), lack of self-assessment culture (Sanz-Benito et al., 2023), absence of learning visibility strategies beyond grade analytics (Cabra-Torres, 2010), and difficulty in making autonomous decisions by students (Gowin; Millman, 1981) are highlighted.

In addition to the above, authors such as Rodríguez (2013), Ruíz Martín (2020), Viñolas and Sepulveda (2022), and Wu and Gun (2021) emphasize a weakness in the design of learning assessments in terms of repetitiveness, monotony, or mechanization of assessment activities. Other authors, such as Mamani Choque et al. (2022), focus their attention on deficiencies in timely and effective feedback. Finally, another group of researchers, like Iafrancesco Villegas (2017), highlights the need to create much more empathetic assessment spaces, where there is closer interaction between students and teachers and the importance of learning styles is recognized.

On the other hand, in a general sense, it can be stated that the assessment of learning has been a topic of growing interest among the community of teachers and researchers in Education, as evidenced in Figure 1, which shows the number of articles related to learning assessment processes in the context of Distance and Technology-Enhanced Learning environments that were published in peer-reviewed journals indexed in Scopus. While this Figure demonstrates a growing trend parallel to the development and evolution of digital technologies, the low number of publications per year indicates that it is still a topic with ample room for educational research.

Source: Scopus (2024)

Figure 1 Peer-reviewed articles about the Assessment of learning and ICT published in Scopus-indexed journals 

In this context, the need to effectively assess learning by implementing new approaches, logic, tools, and metrics in the evaluation process becomes particularly important.

1.1 Some brief insights on assessment in the era of artificial intelligence

Assessing learning in the era of artificial intelligence presents both opportunities and significant challenges, especially considering the capacity of artificial intelligence to transform how information is currently collected, analyzed, and utilized to assess student learning (Bitencourt; Silva; Xavier, 2022; Diyer; Achtaich; Najib, 2020; Parreira; Lehmann; Oliveira, 2021).

In general, experts in the field, such as Salazar, Ovalle and De La Prieta (2019) or Duque-Méndez, Tabares-Morales and Ovalle (2020), indicate that one key advantage of artificial intelligence in learning assessment is its ability to efficiently and accurately process large volumes of data. In this sense, machine learning algorithms designed to support assessment processes should have the capacity to analyze complex patterns in the data generated by students and provide valuable information about their performance and progress (Grimalt-Álvaro; Usart, 2024). This would assist educators not only in accurately reporting learning outcomes but also in making better-informed and more personalized decisions regarding individualized teaching and support for each student (Kaliwal; Deshpande, 2021).

However, there are also inherent challenges in AI-based learning assessment, such as ensuring the validity, reliability, and transparency of the results generated by automated assessment systems, as well as addressing ethical concerns related to the privacy of data collected and analyzed in these processes (Guan; Feng; Islam, 2023). Furthermore, finding an appropriate balance between the involvement of automated systems and human input in learning assessment processes is challenging. While artificial intelligence can provide valuable insights, we believe it should not completely replace the evaluation conducted by educators and experts in the field. In this regard, we endorse the views of Tataw (2023) and Burgess and Rowsell (2020) that learning assessment should be a holistic process that considers both quantitative and qualitative aspects of learning, including multiple variables and purposes.

Building upon the aforementioned, in this article, we aim to propose a perspective for approaching the creation of assessment support systems that utilize artificial intelligence, based on articulating three key concepts: cybernetics of self-regulation, homeostasis, and fuzzy logic.

1.2 Cybernetics of self-regulation and homeostasis: a response to cognitive imbalance

For a long time, the relationship between body and mind has been treated as a dichotomy: although related, bodily processes have been studied separately from cognitive processes (Berent, 2023). However, human beings are integrated entities, characterized by the intimate connection between bodily and mental functions; the two cannot be separated, as they work together to shape our experience and existence (Bernier; Carlson; Whipple, 2010). For instance, bodily functions such as respiration, digestion, and movement are closely linked to mental functions such as thinking, emotion, and perception (Trevarthen, 2012). In other words, our mental states influence our physical well-being, and vice versa; this interconnectedness reflects the complexity and holistic unity of the human being as an integral entity.

Taking the above into account, it is relevant to recall and explore the term “Homeostasis” and its application within the framework of this article. According to Kelkar (2021), Homeostasis is a fundamental principle in both biology and psychology, which describes the inherent tendency of living organisms to maintain a state of internal balance to self-regulate physiological and psychological variables and ensure optimal functioning.

When approached within the context of learning, homeostasis assumes a particularly interesting dimension, as it pertains to the equilibrium sought by the cognitive system to restore lost conceptual harmony due to the impact of cognitive imbalance generated by the learning process (Ciaunica et al., 2021). To understand this, it is important to recall that, according to Jean Piaget’s theory of cognitive development, cognitive disequilibrium is a state of conflict or discrepancy between existing cognitive structures and new experiences or information, which is a necessary condition for cognitive growth and development as it motivates individuals to adjust and reorganize their cognitive schemes to achieve a new equilibrium (Goswami; Chen; Dubrawski, 2020). This process of self-regulation allows students to harmoniously integrate new knowledge, connecting it with their existing knowledge base and constructing a deeper and more comprehensive understanding.

Now, having addressed homeostasis and cognitive disequilibrium, we will introduce the cybernetics of self-regulation to position this argument within the framework of digital systems, particularly in the context of using artificial intelligence. According to Mackenzie, Mezo and Francis (2012), cybernetics of self-regulation refers to the process by which self-regulating systems, such as human beings (and now, certain developments in artificial intelligence), adjust their behavior in response to deviations between established goals and achieved outcomes. In the context of learning, Zachariou et al. (2023) and Prather et al. (2020) indicate that self-regulation involves students’ ability to monitor, regulate, and adjust their cognitive processes and behavior to achieve better performance, which is closely linked to metacognition.
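The deviation-correcting loop just described can be sketched as a minimal feedback controller. All names and numeric values below are hypothetical illustrations, not part of the cited works: a system repeatedly compares its outcome against a goal and adjusts in proportion to the gap until it settles near equilibrium.

```python
# Minimal sketch of a cybernetic self-regulation loop (hypothetical values):
# the system measures the deviation between goal and outcome, adjusts its
# behavior proportionally, and stops when equilibrium (homeostasis) is reached.

def self_regulate(goal, outcome, adjustment_rate=0.5, tolerance=0.01, max_cycles=100):
    """Iteratively reduce the deviation between a goal and an achieved outcome."""
    history = [outcome]
    for _ in range(max_cycles):
        deviation = goal - outcome              # feedback: gap between goal and result
        if abs(deviation) <= tolerance:         # equilibrium reached
            break
        outcome += adjustment_rate * deviation  # adjust behavior toward the goal
        history.append(outcome)
    return outcome, history

final, trace = self_regulate(goal=1.0, outcome=0.2)
print(round(final, 3))  # converges close to the goal of 1.0
```

The design point is that the adjustment is driven only by the measured deviation, which is exactly the role feedback plays in the assessment processes discussed below.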

1.3 So, what relationship emerges between homeostasis, cybernetics of self-regulation, and cognitive disequilibrium?

Up to this point, it has been posited that the learning process generates cognitive imbalances that need to be consistently resolved to consolidate the outcomes of such learning, a phenomenon that has been studied as an educational phenomenon for several decades (Goswami; Chen; Dubrawski, 2020; Ward; Pellett; Perez, 2017). It is within this framework that cognitive homeostasis and cybernetics of self-regulation emerge as key issues; the former as a process that capitalizes on the consolidation of learning, and the latter as a pathway to enhance all of the aforementioned through the use of automated digital systems, also referred to as “intelligent” systems.

To conclude the exploration of the previous question, one final key concept emerges: feedback. According to Ackerman, Vance and Ball (2016), self-regulation cannot be effectively accomplished without appropriate feedback. From this perspective, feedback plays a crucial role by providing students with clear guidance on their performance and by giving them specific information about their strengths and areas for improvement, enabling them to make the necessary changes to restore cognitive equilibrium (Lodge et al., 2018).

Within the framework of cybernetics of self-regulation, feedback would be considered a process mediated by intelligent systems, linking teachers, classmates, and students, and providing them with an external and impartial perspective on their performance. This perspective would offer a better-informed condition to correct errors, strengthen knowledge, acquire new skills, reflect on learning, assess progress, and establish realistic goals for the future to achieve cognitive homeostasis.

1.4 Fuzzy logic: addressing the challenges of competency assessment in Assessment 4.0

Considering the above, one of the main challenges lies in developing these intelligent systems that underpin self-regulation and homeostasis processes and enable the generation of effective feedback to support teachers’ work and students’ learning processes.

It is at this point that we introduce both “fuzzy logic” and “competency assessment” as the conceptual framework for the development of these AI-based support systems.

In the mid-1960s, the mathematician and engineer Lotfi Zadeh, considered the father of fuzzy logic, proposed its main tenets. According to Jamaaluddin et al. (2019), fuzzy logic is a type of logic that enables reasoning and decision-making in situations involving imprecision or uncertainty. Unlike traditional logic, which uses binary values (true/false), fuzzy logic allows for the representation and management of the imprecision and vagueness found in many real-world problems.

Furthermore, Renkas and Niewiadomski (2014) indicate that in fuzzy logic, truth values are expressed in terms of degrees of membership in fuzzy sets, where elements can have partial membership, ranging from 0 to 1, indicating the extent to which an element belongs to the set. This type of partial membership allows for a more suitable representation of uncertainty and imprecision compared to classical logic (Eshuis; Firat; Kaymak, 2021).

Fuzzy logic has been applied in various areas, including artificial intelligence, control systems, decision-making, robotics, and many others (Sousa; Nunes; Lopes, 2015). Its flexibility and ability to deal with uncertainty make it particularly useful in situations where data is incomplete, ambiguous, or subjective, which is often the case in educational processes and, more specifically, in learning assessments.

In this regard, Boychenko et al. (2021) acknowledge the relevance of fuzzy logic for competency assessment processes, as competencies are manifested through performances and are evaluated on a scale, rather than in a binary manner. This means that competence is not simply present or absent but rather developed and positioned at a certain level on the scale at the time of assessment.

Another relevant aspect of fuzzy logic to add to this analysis relates to the fact that competencies are typically assessed using rubrics, which are instruments that consider multiple levels of competence and establish descriptors for each level, allowing for the estimation of the correspondence between descriptors and the performance to be evaluated for a student (Chanchí; Sierra; Campo, 2021). However, as performances are inherently complex, it is common for them to not fully align with a single descriptor or to contain elements from multiple descriptors. This is where fuzzy logic would play a key role in evaluating these performances by considering multiple variables that determine different degrees of membership to various descriptors within the rubric (Rao; Mangalwede; Deshmukh, 2018).
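As a sketch of this idea, a rubric’s levels can be modeled as overlapping trapezoidal fuzzy sets, so that one complex performance partially matches two adjacent descriptors at once. The level names and boundary values below are invented for illustration only.

```python
# Hypothetical rubric whose levels are overlapping trapezoidal fuzzy sets:
# a single performance score can belong partially to several descriptors.

def trapezoid(x, a, b, c, d):
    """Membership rising on [a, b], flat at 1 on [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

rubric_levels = {                       # (a, b, c, d) on a 0-100 performance scale
    "Low":       (0, 0, 30, 50),
    "Basic":     (30, 50, 60, 75),
    "High":      (60, 75, 85, 95),
    "Excellent": (85, 95, 100, 100),
}

score = 70
memberships = {level: round(trapezoid(score, *p), 2)
               for level, p in rubric_levels.items()}
print(memberships)  # the performance is partly "Basic" and partly "High"
```

A score of 70 here belongs to "High" to degree 0.67 and to "Basic" to degree 0.33, which is precisely the kind of partial correspondence between a performance and multiple descriptors that the paragraph above describes.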

Furthermore, within the current framework of the close relationship between assessment and promotion, where learning assessment ultimately results in a binary pass/fail judgment, allowing students to progress to the next level in their educational journey, it is important to highlight that managing uncertainty and imprecision through the application of fuzzy logic in rubric-based assessment would provide a logical basis for reaching the final binary judgment while accounting for the inherent complexity of student performance.

According to Schembari and Jochen (2013), traditional assessment methods rely on weighted averages and classical logic to measure learning outcomes, legitimizing the learning process in curricula. However, as discussed in this article, fuzzy logic offers a more suitable framework for AI-based assessment systems, given the ambiguous and complex nature of evaluation. Traditional methods often lack sufficient evidence of acquired learning, and when multiple assessors are involved, differences in experience and expertise can lead to varying assessments of the same learning. Fuzzy logic addresses these complexities by offering more nuanced, adaptable decision-making processes.

As an example, Figure 2 presents a case in which a student has a final weighted score of 2.9, which, under classical logic, corresponds to the “Fail” category since, according to institutional rules, a course is passed only if the minimum weighted score is equal to or higher than 3.0. However, in some cases academic recording systems are programmed to “round up” grades that are close to the passing threshold, and in other instances instructors perform such rounding directly.

Source: Own elaboration (2024)

Figure 2 Fuzzy interval [2.6, 3.0] 
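Assuming the linear shape suggested by Figure 2 (an assumption for illustration), membership in a “Pass” fuzzy set over the interval [2.6, 3.0] can be sketched as a right-shoulder function, under which a 2.9 is mostly, though not fully, a passing grade:

```python
# Sketch of the fuzzy interval [2.6, 3.0]: instead of a crisp cutoff at 3.0,
# membership in "Pass" rises linearly across the interval (shape assumed).

def pass_membership(grade, low=2.6, high=3.0):
    """Right-shoulder membership: 0 below low, 1 above high, linear in between."""
    if grade <= low:
        return 0.0
    if grade >= high:
        return 1.0
    return (grade - low) / (high - low)

print(round(pass_membership(2.9), 2))  # 0.75: far closer to "Pass" than the crisp rule admits
print(pass_membership(2.5))            # 0.0
```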

Now, what pedagogical argument would a teacher, or an academic recording system, have for raising the final grade from two point nine (2.9) to three point zero (3.0), thereby changing the classification from “Fail” to “Pass”? Furthermore, if the previous example involved multiple students, each with different abilities and limitations inherent to their individuality, how can we be sure that the reasons for raising or not raising a grade correspond to the realities of those students?

Considering the above, we can indicate that the assessment of a student using classical metrics (weighted averages) particularly reveals two difficulties: firstly, the numerical assessment assigned by an expert using classical metrics can be imprecise and vary among different assessors depending on their experience and socio-affective factors; and secondly, the qualitative assessment using linguistic labels (Pass-Fail; Low, Basic, High, Excellent, etc.) assumes low levels of precision depending on each assessor’s perception and understanding of these labels.

To appropriately address situations like the ones mentioned above, we propose the implementation of an expert recommendation system based on the use of artificial intelligence, through the application of fuzzy logic. This system would enable an “intelligent” and unbiased evaluation of academic and metacognitive processes in students, allowing for an evolutionary, comprehensive, and homeostatic assessment.

In this regard, Figure 3, based on the work of Pitalúa-Díaz et al. (2009), illustrates the stages of reasoning and information processing in a fuzzy expert system. This system consists of four components: a fuzzification interface, a knowledge base, a decision-making unit, and a defuzzification interface.

Source: Own elaboration (2024)

Figure 3 Components of a fuzzy reasoning system 

In the initial stage, the fuzzification interface measures the values of the system’s input variables and maps them onto a fuzzy universe of discourse, translating the range of values into linguistic terms; fuzzification thus transforms the input data into linguistic values (Thaker; Nagori, 2018). The second component is the knowledge base, which contains general information about the system. It consists of a structured fuzzy database composed of membership functions, or degrees of membership to fuzzy sets, assigned intermediate values between zero (0) and one (1), together with “if-then” propositions (Chanchí; Sierra; Campo, 2021). Additionally, it includes a set of linguistic rules that control the system’s variables. The third component is the decision-making unit, which simulates the reasoning and logic used by humans when assessing a learning process (Eshuis; Firat; Kaymak, 2021). Finally, the fourth component is the defuzzification interface, which performs the mapping that transforms the range of output variable values back to their corresponding universes of discourse, converting fuzzy results into numerical values understandable by current academic recording systems (Obregón; Romero, 2013).
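To make the four stages concrete, the following is a minimal, runnable sketch in which every fuzzy set, rule, and parameter value is an illustrative assumption rather than the system described in this article: a crisp exam score is fuzzified, a small if-then rule base is fired with min/max operators, and the result is defuzzified by the centroid method.

```python
# Sketch of a Mamdani-style fuzzy expert system with the four components
# described above. All sets, rules, and scales are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# 1) Fuzzification interface: map a crisp exam score (0-5) to linguistic terms.
exam_sets = {"bad": (0, 0, 2.5), "average": (1.5, 3.0, 4.5), "good": (3.5, 5, 5)}

def fuzzify(x, sets):
    return {term: tri(x, *abc) for term, abc in sets.items()}

# 2) Knowledge base: if-then rules linking input terms to output terms.
rules = [("bad", "failed"), ("average", "approved"), ("good", "approved")]

# 3) Decision-making unit: each rule fires to the degree of its antecedent;
#    strengths for the same output term are aggregated with max.
def infer(memberships, rules):
    fired = {}
    for antecedent, consequent in rules:
        strength = memberships.get(antecedent, 0.0)
        fired[consequent] = max(fired.get(consequent, 0.0), strength)
    return fired

# 4) Defuzzification interface: centroid of the clipped output sets,
#    sampled over the output universe (a 0-5 final-concept scale).
out_sets = {"failed": (0, 0, 3.0), "approved": (2.5, 5, 5)}

def defuzzify(fired, sets, lo=0.0, hi=5.0, steps=500):
    num = den = 0.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        mu = max(min(s, tri(x, *sets[term])) for term, s in fired.items())
        num += x * mu
        den += mu
    return num / den if den else 0.0

m = fuzzify(2.9, exam_sets)       # a 2.9 is largely "average", not simply "bad"
fired = infer(m, rules)
print(round(defuzzify(fired, out_sets), 2))  # crisp value well inside "approved"
```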

In this context, Figure 4 illustrates the variables considered in the design of a fuzzy expert system for assessing learning outcomes based on homeostatic processes inherent in self-regulation cybernetics, through reflective evaluation grounded in evidence. The system consists of two input variables: performance in the final exam (EF-H)(x) and the assessment of the level of metacognition achieved through feedback from a portfolio of evidence provided by students (PM-P)(x). The output variable is the final approval concept (CF)(x).

Source: Own elaboration (2024)

Figure 4 Variables of the fuzzy logic intelligent system 

To represent the degrees of membership of each input variable in the fuzzy sets defined by membership functions, Figure 5 displays the different types of fuzzy sets commonly used to link, match, interconnect, or correspond to the considered fuzzy values. These sets are as follows: Right-shoulder or right-saturation membership function, Left-shoulder or left-saturation membership function, Triangular membership function, Trapezoidal membership function, Gaussian membership function, and Gamma membership function.

Source: Own elaboration (2024)

Figure 5 Sets of membership or membership functions based on mathematical models. 
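The families named in Figure 5 can be written out directly; the implementations below are illustrative (parameter values are assumptions), each mapping a crisp value to a membership degree in [0, 1].

```python
# Illustrative implementations of common membership-function families.
import math

def left_shoulder(x, a, b):
    """Saturated at 1 below a, falling linearly to 0 at b."""
    if x <= a: return 1.0
    if x >= b: return 0.0
    return (b - x) / (b - a)

def right_shoulder(x, a, b):
    """0 below a, rising linearly to saturation 1 at b."""
    if x <= a: return 0.0
    if x >= b: return 1.0
    return (x - a) / (b - a)

def triangular(x, a, b, c):
    """Peak 1 at b, feet at a and c."""
    if x <= a or x >= c: return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Rises on [a, b], flat at 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d: return 0.0
    if b <= x <= c: return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def gaussian(x, mean, sigma):
    """Bell curve centered at mean with width sigma."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# Every family assigns full membership at the "center" of its set:
print(triangular(3.0, 2.0, 3.0, 4.0), round(gaussian(3.0, 3.0, 0.5), 3))
```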

Figure 6 displays the membership functions of the input and output fuzzy sets of the fuzzy expert system. In this system, right-shoulder, left-shoulder, triangular, and trapezoidal membership functions were used.

Source: Own elaboration (2024)

Figure 6 Fuzzy input and output sets of the system 

On the other hand, Table 1 presents a decision matrix for the input and output variables of the system, consolidating a base of sixteen (16) fuzzy rules (expert knowledge). For example, rule R3 in the structured fuzzy rule base reads: if there is evidence of a “BAD” level in the final test and a “BEST” level in metacognition, then the student is classified as “APPROVED.”

Table 1 Decision matrix for the construction of fuzzy rules of the system 

                 Final Test
Metacognition    BAD        AVERAGE     GOOD        BEST
BAD              FAILED     APPROVED    APPROVED    APPROVED
AVERAGE          FAILED     APPROVED    APPROVED    APPROVED
GOOD             APPROVED   APPROVED    APPROVED    APPROVED
BEST             APPROVED   APPROVED    APPROVED    APPROVED

Source: Own elaboration (2024)
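Read crisply, before fuzzy firing strengths are applied, the decision matrix of Table 1 can be encoded directly as a lookup table; a minimal sketch:

```python
# Table 1 encoded as a nested lookup: rows are metacognition levels,
# columns are final-test levels (a crisp reading of the rule base).

DECISION = {
    "BAD":     {"BAD": "FAILED",   "AVERAGE": "APPROVED", "GOOD": "APPROVED", "BEST": "APPROVED"},
    "AVERAGE": {"BAD": "FAILED",   "AVERAGE": "APPROVED", "GOOD": "APPROVED", "BEST": "APPROVED"},
    "GOOD":    {"BAD": "APPROVED", "AVERAGE": "APPROVED", "GOOD": "APPROVED", "BEST": "APPROVED"},
    "BEST":    {"BAD": "APPROVED", "AVERAGE": "APPROVED", "GOOD": "APPROVED", "BEST": "APPROVED"},
}

def outcome(metacognition, final_test):
    return DECISION[metacognition][final_test]

# A student with a "BAD" final test but "BEST" metacognition is approved:
print(outcome("BEST", "BAD"))  # APPROVED
```

Only the two "BAD"/"AVERAGE"-metacognition, "BAD"-test cells fail, which captures the matrix’s central idea: strong metacognition can compensate for weak final-test performance.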

On the other hand, Figure 7 illustrates the action of the defuzzification interface, which transforms the output of the fuzzy system into a numerical result for the aforementioned case of a student who has a weighted score of 2.9 on the final exam and is classified as FAILED in the final concept. In the context of a homeostatic and reflective evaluation system based on self-regulation cybernetics that values metacognitive processes, this student is assessed with an additional dimension (metacognition) using a different logic than the classical one (fuzzy logic), resulting in an APPROVED outcome.

Source: Own elaboration (2024)

Figure 7 Defuzzification interface action 

In this regard, for the aforementioned case, according to Table 1, which displays the rule base of the fuzzy inference system, rules 1, 2, 3, 4, 5, 6, 7, 8, 11, and 15 were activated. However, two rules were decisive. R3: if there is evidence of a “BAD” level in the final exam (with a definitive grade of 2.9) and a “BEST” level in metacognition (80%), then the student is APPROVED. R7: if the final exam shows a level that is “BAD” under classical logic (a definitive grade of 2.9) but corresponds much more closely to a “BASIC” level under fuzzy logic (see Figure 2), and a “BEST” level in metacognition (80%), then the student is APPROVED.

1.5 A fuzzy logic-based perspective about Evaluation 4.0

Considering what has been mentioned so far, Evaluation 4.0 can be regarded as a holistic, integral, nonlinear, systemic, evidence-based, and non-standardized process characterized primarily by:

  1. Assessing the degree of interconnection of key knowledge necessary to perform adequately in the era of the Fourth Industrial Revolution, including prior knowledge and 21st-century skills, with the new information perceived by the student when faced with a specific learning situation (Ruíz Martín, 2020).

  2. The implementation of Artificial Intelligence (AI) through the integration of homeostatic intelligent systems, by employing non-classical logic for the assessment and decision-making regarding learning outcomes.

In this regard, this dual circumstance serves as the framework for optimizing the evaluation of learning processes in the context of Education 4.0. Importantly, this gains value when it becomes evident that students, through their executive functions, transition from external feedback to self-feedback, from external regulation to self-regulation, and from external motivation to self-motivation in their learning processes (Bernier; Carlson; Whipple, 2010).

The cybernetics of self-regulation involves training students to explicitly internalize and externalize metacognitive processes by continuously monitoring their learning strategies. This enables them to self-evaluate and reflect on both the strengths and areas for improvement in their study methods. The primary goal of Evaluation 4.0 is to establish an effective self-regulation system, guiding students toward cognitive homeostasis. To achieve this, learners must first identify their learning objectives and motivations. They then plan, organize, and structure their strategies and tools in a systematic manner. Finally, they establish evaluation criteria to independently address challenges throughout the learning process (Martín Celis; Cárdenas, 2014).

2 Conclusions

Given this panorama of inferences and conceptual reflections, it is worth noting the following practical implications of implementing Evaluation 4.0 in the classroom:

Evaluation 4.0 shifts focus from traditional memorization to assessing critical and reflective thinking, where students ask meaningful questions and solve problems creatively. It emphasizes collaborative skills, communication (oral, written, and listening), and essential qualities like perseverance, resilience, self-discipline, and empathy. The goal is to assess what students can do with their knowledge, valuing practical application.

This new approach requires a transformation in traditional evaluation, leveraging artificial intelligence tools like fuzzy expert systems to enhance decision-making in assessing learning outcomes. To implement Evaluation 4.0 effectively, classrooms must first foster a culture of metacognition, encouraging self-assessment and peer evaluation, and allowing time for students to develop metacognitive reasoning. This approach aims to deepen students’ understanding and improve learning strategies.
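To make the role of a fuzzy expert system in such decision-making concrete, the sketch below grades a single competency from a 0-100 score using triangular membership functions and a weighted-average defuzzification. The membership functions, linguistic labels, and rule base are assumptions chosen for demonstration, not a validated assessment instrument.

```python
# Illustrative sketch of a tiny fuzzy expert system for one competency.
# All labels, membership ranges, and rule outputs are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assess(score):
    # Fuzzification: degree of membership in each performance level.
    low = tri(score, -1, 0, 50)
    mid = tri(score, 30, 55, 80)
    high = tri(score, 60, 100, 101)
    # Rule base: each level maps to a recommended support intensity
    # (1.0 = intensive scaffolding, 0.1 = minimal guidance).
    levels = {"needs support": (low, 1.0),
              "progressing": (mid, 0.5),
              "autonomous": (high, 0.1)}
    # Defuzzification: membership-weighted average of the rule outputs.
    num = sum(mu * v for mu, v in levels.values())
    den = sum(mu for mu, _ in levels.values())
    label = max(levels, key=lambda k: levels[k][0])
    return label, (num / den if den else 0.0)

label, support = assess(68)
print(label, round(support, 2))
```

A score of 68 is partly "progressing" and partly "autonomous," so the recommended support is a blend of both rules rather than a hard cutoff, which is precisely the nuance fuzzy systems add to crisp grading thresholds.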

Ultimately, Evaluation 4.0, from a homeostatic perspective based on the cybernetics of self-regulation, structures and strengthens (in a bio-inspired manner) the mental tools necessary to develop the executive capacities of any citizen in the 21st century, turning the evaluation process into a true learning experience rather than merely an event to validate or verify what has been learned (Noor, 2019). Thus, it configures evaluation as another opportunity for learning.

In the context of Education 4.0, as mentioned by Ramírez-Montoya et al. (2021), the assessment of learning must take a transformative role, aligning with the shifting focus, tools, and methodologies of teaching in the digital age. Moreover, the educational community recognizes that evaluation is not a standalone activity but an integral part of the learning process. In this direction, by embracing a homeostatic approach informed by cybernetics and self-regulation, Evaluation 4.0 acknowledges the interconnectedness of various aspects of learning and leverages this understanding to optimize educational outcomes.

Regarding the above, Taheri, Gonzalez Bocanegra and Taheri (2022) point out that one significant advantage of Evaluation 4.0 lies in the integration of artificial intelligence. From our perspective, if these intelligent systems act as fuzzy expert systems, they will provide educators with enhanced decision-making capabilities and allow for a more nuanced assessment of metacognitive and cognitive processes. By leveraging AI technologies, teachers can gather and analyze comprehensive data, gaining valuable insights into students’ learning progress and tailoring instructional approaches to meet their individual needs (Ovinova; Shraiber, 2019).

Moreover, Evaluation 4.0 extends beyond the traditional concept of assessment as a one-time event with predetermined criteria and recognizes that learning is a dynamic and iterative process, characterized by continuous growth and improvement. Therefore, as mentioned by Oliveira and Souza (2021), assessment of learning in the context of Education 4.0 encourages ongoing self-reflection, self-evaluation, and self-adjustment. Students are actively involved in their learning journey, engaging in metacognitive practices and developing the ability to monitor, regulate, and adapt their learning strategies to achieve optimal outcomes.

Thus, from a prospective research outlook, it is crucial to invest in the development and implementation of intelligent systems that support learning assessment across various educational levels. This involves designing and refining AI-powered tools that provide accurate and reliable feedback to both students and educators, are versatile enough to cater to diverse student populations, and align with the specific goals and objectives of different educational contexts.

To ensure the effectiveness of such intelligent systems, rigorous research is required. Studies should explore the impact of AI-powered assessment tools on student learning outcomes, engagement, and motivation. Additionally, it is essential to examine the ethical implications of AI integration in education, ensuring fairness, transparency, and data privacy (Tiwari et al., 2022).

Evaluation 4.0, supported by fuzzy intelligent systems, creates a learning environment focused on continuous improvement and personalized experiences. It equips students with 21st-century skills, promoting active participation, critical thinking, and creative problem-solving. This evaluation method fosters cognitive homeostasis, where students adapt and refine their strategies for optimized learning.

Successful implementation requires a multifaceted approach that emphasizes collaboration, advanced cognitive skills, and the integration of intelligent technologies. Fuzzy expert systems provide personalized feedback, identifying areas for improvement and offering tailored recommendations. However, rigorous research is necessary to evaluate the impact of these technologies and address ethical concerns.

Grounded in homeostasis and self-regulation, Evaluation 4.0 redefines assessment, transforming it into a meaningful learning experience. As we navigate the Fourth Industrial Revolution, investing in intelligent systems and fostering metacognitive practices will prepare learners for future challenges and opportunities.

Acknowledgments

We thank both Fundación Universidad de La Sabana (Group Technologies for the Academia - Proventus (EDUPHD-20-2022)) and Universitaria Navarra – Uninavarra (Group Navarra Education and Digital Technologies) for the support received in the preparation of this article.

References

ACKERMAN, M. L.; VANCE, D. E.; BALL, K. K. What factors influence the relationship between feedback on cognitive performance and subsequent driving self-regulation? Journal of Applied Gerontology, Thousand Oaks, v. 35, n. 6, p. 653-663, June 2016. https://doi.org/10.1177/0733464814529473

AKIMOV, N. et al. Components of education 4.0 in open innovation competence frameworks: systematic review. Journal of Open Innovation: Technology, Market, and Complexity, [s. l.], v. 9, n. 2, p. 100037, June 2023. https://doi.org/10.1016/j.joitmc.2023.100037

BERENT, I. The illusion of the mind-body divide is attenuated in males. Scientific Reports, [s. l.], v. 13, n. 1, p. 6653, Apr. 2023. https://doi.org/10.1038/s41598-023-33079-1

BERNIER, A.; CARLSON, S. M.; WHIPPLE, N. From external regulation to self-regulation: early parenting precursors of young children's executive functioning. Child Development, [s. l.], v. 81, n. 1, p. 326-339, Jan. 2010. https://doi.org/10.1111/j.1467-8624.2009.01397.x

BITENCOURT, W. A.; SILVA, D. M.; XAVIER, G. D. C. Pode a inteligência artificial apoiar ações contra evasão escolar universitária? Ensaio: Avaliação e Políticas Públicas em Educação, Rio de Janeiro, v. 30, n. 116, p. 669-694, jul./set. 2022. https://doi.org/10.1590/S0104-403620220003002854

BOYCHENKO, O. V. et al. Fuzzy set theory in determining learning process effectiveness. [S. l.: s. n.], 2021. Available from: https://ceur-ws.org/Vol-2834/Paper38.pdf. Access: 2023 Oct 26.

BURGESS, J.; ROWSELL, J. Transcultural-affective flows and multimodal engagements: reimagining pedagogy and assessment with adult language learners. Language and Education, [s. l.], v. 34, n. 2, p. 173-191, Mar. 2020. https://doi.org/10.1080/09500782.2020.1720226

CABRA-TORRES, F. Dialogue as the foundation for ethical communication in evaluation. Educación y Educadores, Bogotá, v. 13, n. 2, p. 239-252, Aug. 2010.

CHAIKA, O. et al. Zero tolerance to plagiarism in multicultural teamwork: challenges for English-speaking non-EU and EU academics. World Journal of English Language, [s. l.], v. 13, n. 4, p. 14, Apr. 2023. https://doi.org/10.5430/wjel.v13n4p14

CHANCHÍ, G. E.; SIERRA, L. M.; CAMPO, W. Y. Application of fuzzy logic in the implementation of evaluation rubrics in the university context. RISTI: Revista Iberica de Sistemas e Tecnologias de Informacao, [s. l.], v. E42, p. 164-177, 2021.

CHITUC, C.-M. A framework for Education 4.0 in digital education ecosystems. In: CAMARINHA-MATOS, L. M.; BOUCHER, X.; AFSARMANESH, H. (eds.). Smart and sustainable collaborative networks 4.0. IFIP Advances in Information and Communication Technology. Cham: Springer International, 2021. p. 702-709.

CIAUNICA, A. et al. The first prior: from co-embodiment to co-homeostasis in early life. Consciousness and Cognition, San Diego, v. 91, p. 103117, May 2021. https://doi.org/10.1016/j.concog.2021.103117

DIYER, O.; ACHTAICH, N.; NAJIB, K. Artificial intelligence in learning skills assessment: a pedagogical innovation. In: INTERNATIONAL CONFERENCE ON NETWORKING, INFORMATION SYSTEMS & SECURITY, 3., 2020, Marrakech, Morocco. Proceedings [...]. Available from: https://dl.acm.org/doi/10.1145/3386723.3387901. Access: 2023 June 30.

DUQUE-MÉNDEZ, N. D.; TABARES-MORALES, V.; OVALLE, D. A. Intelligent agents system for adaptive assessment. In: GENNARI, R. et al. (eds.). Methodologies and intelligent systems for technology enhanced learning: 9th International Conference. Advances in Intelligent Systems and Computing. Cham: Springer International, 2020. p. 164-172.

ESHUIS, R.; FIRAT, M.; KAYMAK, U. Modeling uncertainty in declarative artifact-centric process models using fuzzy logic. Information Sciences, [s. l.], v. 579, p. 845-862, Nov. 2021. https://doi.org/10.1016/j.ins.2021.07.075

GOSWAMI, M.; CHEN, L.; DUBRAWSKI, A. Discriminating cognitive disequilibrium and flow in problem solving: a semi-supervised approach using involuntary dynamic behavioral signals. In: AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 34., 2020, New York. New York: AAAI, 2020.

GOWIN, D. B.; MILLMAN, J. Toward reform of program evaluation: Lee J. Cronbach, S. R. Ambron, S. M. Dornbusch, R. D. Hess, R. C. Hornik, D. C. Phillips, D. F. Walker, and S. S. Weiner. Educational Evaluation and Policy Analysis, v. 3, n. 6, p. 85-87, Nov. 1981. https://doi.org/10.3102/01623737003006085

GRIMALT-ÁLVARO, C.; USART, M. Sentiment analysis for formative assessment in higher education: a systematic literature review. Journal of Computing in Higher Education, New York, v. 36, p. 647-682, Apr. 2024. https://doi.org/10.1007/s12528-023-09370-5

GUAN, X.; FENG, X.; ISLAM, A. Y. M. A. The dilemma and countermeasures of educational data ethics in the age of intelligence. Humanities and Social Sciences Communications, v. 10, n. 1, p. 138, Apr. 2023. https://doi.org/10.1057/s41599-023-01633-x

IAFRANCESCO VILLEGAS, G. M. Propuesta de modelo holístico para la evaluación integral y de los aprendizajes en una escuela transformadora. Revista Paca, [s. l.], n. 8, p. 34-50, June 2017.

JAMAALUDDIN, J. et al. The utilization of levelled fuzzy logic for more precision results. Journal of Physics: Conference Series, Bristol, v. 1402, n. 7, 2019. https://doi.org/10.1088/1742-6596/1402/7/077037

JURADO VALENCIA, F. Hacia la renovación de la formación de los docentes en Colombia: ruta tradicional y ruta polivalente. Pedagogía y Saberes, [s. l.], n. 45, p. 11-22, Dec. 2016.

KALIWAL, R. B.; DESHPANDE, S. L. Assessment study for e-learning using bayesian network. In: INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND SMART SYSTEMS (ICAIS), 2021, Coimbatore, India. Available from: https://ieeexplore.ieee.org/document/9395830/. Access: 2023 Jul 1.

KELKAR, A. Cognitive homeostatic agents. In: INTERNATIONAL JOINT CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS), 2021. Proceedings […]. [S. l.]: International Foundation for Autonomous Agents and Multiagent Systems, 2021.

LEE, M. et al. How to respond to the Fourth Industrial Revolution, or the Second Information Technology Revolution? Dynamic new combinations between technology, market, and society through open innovation. Journal of Open Innovation: Technology, Market, and Complexity, [s. l.], v. 4, n. 3, p. 21, Sep. 2018. https://doi.org/10.3390/joitmc4030021

LODGE, J. M. et al. Understanding difficulties and resulting confusion in learning: an integrative review. Frontiers in Education, Lausanne, v. 3, Article 49, June 2018. https://doi.org/10.3389/feduc.2018.00049

MACKENZIE, M. B.; MEZO, P. G.; FRANCIS, S. E. A conceptual framework for understanding self-regulation in adults. New Ideas in Psychology, Oxford, v. 30, n. 2, p. 155-165, Aug. 2012. https://doi.org/10.1016/j.newideapsych.2011.07.001

MAMANI CHOQUE, C. A. et al. Retroalimentación según los efectos en el aprendizaje en la educación virtual. EDUCATECONCIENCIA, [s. l.], v. 30, n. 34, p. 241-265, 2022. https://doi.org/10.58299/9xze2k48

MARTÍN CELIS, Y. M.; CÁRDENAS, M. L. Promoting adolescent EFL students' decision-making through work plans gathered in their portfolios. Folios, Bogotá, n. 39, p. 89-105, Jan. 2014.

MARTINEZ, L.; RAMÍREZ, E. Fraude académico en universitarios en Colombia: ¿qué tan crónica es la enfermedad? Educação e Pesquisa, v. 44, June 2017. https://doi.org/10.1590/S1517-9702201706157079

NOOR, I. Using evaluation as a learning process in post-conflict Somalia. Journal of Somali Studies, v. 6, n. 2, p. 75-101, Dec. 2019. https://doi.org/10.31920/2056-5682/2019/6n2a4

OBREGÓN, N.; ROMERO, J. Aplicaciones de sistemas inteligentes en Ingeniería Agrícola. Neiva: Universidad SurColombiana, 2013.

OLIVEIRA, K. K. D. S.; SOUZA, R. A. C. Digital transformation towards Education 4.0. Informatics in Education, [s. l.], v. 21, n. 2, p. 283-309, 2022. https://doi.org/10.15388/infedu.2022.13

OVINOVA, L. N.; SHRAIBER, E. G. Pedagogical model to train specialists for Industry 4.0 at University. Perspectives of Science and Education, [s. l.], v. 40, n. 4, p. 448-461, Sep. 2019. https://doi.org/10.32744/pse.2019.4.34

PARREIRA, A.; LEHMANN, L.; OLIVEIRA, M. O desafio das tecnologias de inteligência artificial na Educação: percepção e avaliação dos professores. Ensaio: Avaliação e Políticas Públicas em Educação, Rio de Janeiro, v. 29, n. 113, p. 975-999, Dec. 2021. https://doi.org/10.1590/S0104-40362020002803115

PATIÑO, A.; RAMÍREZ-MONTOYA, M. S.; BUENESTADO-FERNÁNDEZ, M. Active learning and education 4.0 for complex thinking training: analysis of two case studies in open education. Smart Learning Environments, [s. l.], v. 10, n. 1, p. 8-25, Jan. 2023. https://doi.org/10.1186/s40561-023-00229-x

PFEIFFER, A. et al. Blockchain technologies for the validation, verification, authentication and storing of students' data. In: EUROPEAN CONFERENCE ON E-LEARNING (ECEL), 19., 2020, Berlin. Proceedings […]. Berlin: Academic Conferences and Publishing International Limited, 2020.

PRATHER, J. et al. What do we think we think we are doing?: metacognition and self-regulation in programming. In: ACM CONFERENCE ON INTERNATIONAL COMPUTING EDUCATION RESEARCH, 2020. Proceedings [...]. New Zealand: ACM, 2020.

RAMÍREZ-MONTOYA, M. S. et al. Characterization of the teaching profile within the framework of Education 4.0. Future Internet, [s. l.], v. 13, n. 4, p. 91, Apr. 2021. https://doi.org/10.3390/fi13040091

RAO, D. H.; MANGALWEDE, S. R.; DESHMUKH, V. B. Student performance evaluation model based on scoring rubric tool for network analysis subject using fuzzy logic. In: INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONICS, COMMUNICATION, COMPUTER TECHNOLOGIES AND OPTIMIZATION TECHNIQUES (ICEECCOT), 2017, Mysuru, India. 2018.

RENKAS, K.; NIEWIADOMSKI, A. Hierarchical fuzzy logic systems: current research and perspectives. Cham: Springer, 2014. p. 295-306. (Lecture Notes in Computer Science, v. 8467).

RODRÍGUEZ, O. R. La evaluación objetiva en ingeniería: aportes en procesos de evaluación y mejora curricular. In: WORLD ENGINEERING EDUCATION FORUM, 2013, Cartagena, Colombia. Bogotá: Asociación Colombiana de Facultades de Ingeniería, 2013.

RUÍZ MARTÍN, H. ¿Cómo aprendemos?: una aproximación científica al aprendizaje y la enseñanza. 2nd ed. Barcelona: Graó, 2020.

SALAZAR, O. M.; OVALLE, D. A.; DE LA PRIETA, F. Towards an adaptive and personalized assessment model based on ontologies, context and collaborative filtering. In: RODRÍGUEZ, S. et al. (eds.). Distributed computing and artificial intelligence, 15. Cham: Springer International, 2019. p. 311-314. (Advances in Intelligent Systems and Computing, v. 801).

SANZ-BENITO, I. et al. Formar y evaluar competencias en educación superior: una experiencia sobre inclusión digital. RIED: Revista Iberoamericana de Educación a Distancia, v. 26, n. 2, p. 199-217, Mar. 2023. https://doi.org/10.5944/ried.26.2.35791

SCHEMBARI, N. P.; JOCHEN, M. The assessment of learning outcomes in information assurance curriculum. In: INFORMATION SECURITY CURRICULUM DEVELOPMENT CONFERENCE, 2013, Kennesaw, USA. Proceedings […].

SOUSA, S. D. T.; NUNES, E. M. P.; LOPES, I. S. Uncertainty characterization of performance measure: a fuzzy logic approach. In: TRANSACTIONS ON ENGINEERING TECHNOLOGIES: WORLD CONGRESS ON ENGINEERING AND COMPUTER SCIENCE 2014. [S. l.: s. n.], 2014. p. 485-499.

TAHERI, H.; GONZALEZ BOCANEGRA, M.; TAHERI, M. Artificial intelligence, machine learning and smart technologies for nondestructive evaluation. Sensors, Basel, v. 22, n. 11, p. 4055, May 2022. https://doi.org/10.3390/s22114055

TATAW, D. B. Holistic evaluation of a team-lecture hybrid (TLH) instructional design applied in a public affairs course. Journal of Research in Innovative Teaching & Learning, [s. l.], May 2023. https://doi.org/10.1108/JRIT-01-2023-0007

THAKER, S.; NAGORI, V. Analysis of fuzzification process in fuzzy expert system. Procedia Computer Science, [Amsterdam], v. 132, p. 1308-1316, 2018. https://doi.org/10.1016/j.procs.2018.05.047

TIWARI, R. G. et al. Education 4.0: classification of student adaptability level in e-education. In: INTERNATIONAL CONFERENCE ON RELIABILITY, INFOCOM TECHNOLOGIES AND OPTIMIZATION, 10., 2022, Noida, India.

TREVARTHEN, C. Embodied human intersubjectivity: imaginative agency, to share meaning. Cognitive Semiotics, [s. l.], v. 4, n. 1, p. 6-56, 2012. https://doi.org/10.1515/cogsem.2012.4.1.6

VIÑOLAS, E.; SEPULVEDA, L. Gamificación y la evaluación de los aprendizajes en educación superior. In: ACTAS DEL CONGRESO INTERNACIONAL DE INNOVACIÓN, CIENCIA Y TECNOLOGÍA (INUDI-UH, 2022). Anais [...]. Lima: Instituto Universitario de Innovación Ciencia y Tecnología Inudi Perú, 2022. p. 273-286.

WARD, S.; PELLETT, H. H.; PEREZ, M. I. Cognitive disequilibrium and service-learning in physical education teacher education: perceptions of pre-service teachers in a study abroad experience. Journal of Teaching in Physical Education, Champaign, v. 36, n. 1, p. 70-82, 2017. https://doi.org/10.1123/jtpe.2015-0006

WU, S. L.; GUN, C. H. Will one be better than two? Exploring project-based joint assessment during a pandemic in higher education: learning assessments. In: INTERNATIONAL CONFERENCE ON EDUCATION AND MULTIMEDIA TECHNOLOGY, 5., 2021, Kyoto, Japan. [S. l.]: ACM, 2021.

ZACHARIOU, A. et al. Exploring the effects of a musical play intervention on young children's self-regulation and metacognition. Metacognition and Learning, New York, v. 18, p. 983-1012, May 2023. https://doi.org/10.1007/s11409-023-09342-1

ZAPATA-ROS, M. La universidad inteligente: la transición de los LMS a los Sistemas Inteligentes de Aprendizaje en Educación Superior. RED: Revista de Educación a Distancia, [s. l.], n. 57, Mar. 2018. https://doi.org/10.6018/red/57/10


Data: The set of data that supports the results of this review can be found directly in the sources stated in the article.

Financing: No funding was received for the manuscript preparation and publishing nor its previous research processes.

Received: May 02, 2024; Accepted: October 08, 2024

Conflict of interest:

The authors declare that they have no commercial or associative interest that represents a conflict of interest concerning the manuscript.

Creative Commons License  This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.