
Revista de Educação PUC-Campinas

Print version ISSN 1519-3993; online version ISSN 2318-0870

Educ. Puc. vol.26  Campinas  2021

https://doi.org/10.24220/2318-0870v26e2021a5312 

Articles

Brazil’s National System for the evaluation of Higher Education: context, challenges, and perspectives

Sistema Brasileiro de avaliação da Educação Superior: contexto, desafios e perspectivas

Robert Evan Verhine1 
http://orcid.org/0000-0002-5157-3680

Lys Maria Vinhaes Dantas2 
http://orcid.org/0000-0001-8225-2321

1Universidade Federal da Bahia, Faculdade de Educação, Programa de Pós-Graduação em Educação. Av. Reitor Miguel Calmon, s/n., Campus Canela, Vale do Canela, 40110-100, Salvador, BA, Brasil.

2Universidade Federal do Recôncavo da Bahia, Centro de Artes, Humanidades e Letras, Área Ciências Sociais Aplicadas. Cachoeira, BA, Brasil.


Abstract

This article discusses the implementation of the National System for the Evaluation of Higher Education from its inception in 2004 to the present, paying special attention to the advances it has achieved and the challenges it must meet in the near future. After reviewing international perspectives on Higher Education quality assurance, the text examines the adjustments made to operationalize the implementation of the System's model, highlighting the importance of improving its self-evaluation component. The article concludes by addressing the challenges that must still be faced, such as the inclusion of state Higher Education systems in the National System for the Evaluation of Higher Education, the improvement of indicators and the training of external evaluators, the effective utilization of evaluation results, the need to distinguish evaluation processes from regulatory policies, and the possibility of transforming the existing framework into a multidimensional evaluation model.

Keywords Evaluation policy; Large-scale evaluation; Quality assurance

Resumo

Este trabalho apresenta uma análise sobre o Sistema Nacional de Avaliação da Educação Superior desde sua implantação, em 2004, até o presente, dando especial atenção aos avanços já obtidos e aos desafios a serem enfrentados em um futuro próximo. Após uma discussão inicial, de perspectiva internacional, sobre a qualidade da educação superior, o texto examina os ajustes que têm sido feitos no processo de implementação do Sistema, quando é dada ênfase à importância do aprimoramento do seu componente de autoavaliação. Para concluir, o artigo relaciona os desafios que ainda devem ser enfrentados, como a inclusão no Sistema Nacional de Avaliação da Educação Superior dos sistemas estaduais de educação, o refinamento dos indicadores e a capacitação/moderação de avaliadores externos, o uso efetivo dos resultados das avaliações, a necessidade de distinção entre políticas avaliativas e políticas regulatórias, e a possibilidade de transformação do arcabouço hoje utilizado em um modelo de avaliação multidimensional.

Palavras-chave Políticas de avaliação; Avaliação em larga escala; Garantia de qualidade

Introduction

Of the 85 institutions of the Western world that have existed since the 15th century, 70 are universities (Kerr, 1982). However, the great majority of today's universities are relatively new, having been established in the latter half of the 20th century. Newer still are evaluations of Higher Education designed to ensure institutional quality in a systematic, large-scale, and legitimate fashion.

The worldwide concern for Higher Education quality assurance first became dominant in the 1980s and 1990s, related to a more general tendency to promote public-service accountability through the creation of what has been labeled the “evaluation state” (Dias Sobrinho, 2003). The systematic external evaluation of universities can be traced to the late 19th century, when universities in the United States, in a decentralized and self-governing manner, first created regional accreditation associations which they themselves financed and made responsible for conferring institutional legitimacy. These non-governmental associations were pioneers in using visits by external commissions composed of peers from the academic community to carry out the evaluation process. This approach responded to a lesson learned through experience: academics only accepted external evaluation when it was conducted by fellow academics (Rhoades; Sporn, 2002).

Unlike in the US, in most parts of the world, universities were designed to be both highly selective and publicly managed. These two characteristics (selectivity and public control) were generally viewed as sufficient to assure adequate quality. However, this understanding began to change in the 1970s and 1980s due to several interrelated factors. A first factor was the relative massification of Higher Education, fueled by a burgeoning demand due to new labor market conditions and rapid secondary-level expansion. Between 1980 and 2000, international Higher Education enrollments quadrupled, and as a result, the assumption of quality guaranteed by exclusivity was undermined (Brennan; Shah, 2001). The rapid expansion also provoked higher costs for education (due to greater competition for scarce resources). Consequently, greater public concern for accountability and transparency concerning institutional management ensued. Also, more Higher Education students meant greater student diversity, which in turn led to more variety in Higher Education offerings. Thus, potential students were given a wider range of choices, and these choices required more information about the nature and quality of Higher Education options.

These tendencies involving enrollment expansion, student diversification, and increasing costs provoked the governments of many countries to give universities, mostly public at the time, greater operational autonomy. That was done under the assumption that decentralized decision-making would lead to a quicker response to local demands, more rational use of public funding, and a willingness by institutions to seek additional funding from other sources. However, the allowance of greater institutional autonomy was accompanied by demands for quality and accountability, and these demands, in conjunction with the tendencies mentioned above, fueled the need for national strategies to ensure Higher Education quality (Brennan, 1997; King, 2007; Lim, 2017; Verhine; Freitas, 2012).

In Europe, large-scale evaluation of Higher Education was further promoted by the Bologna agreement that sought to standardize the value of diplomas received in the participating countries to facilitate the flow of students and workers across borders (Thune, 1998). France, Holland, and the United Kingdom were the first countries to create national quality assurance agencies (European Commission, 1995). By 2005, all European nations and most Asian and Latin American ones had followed suit (Billing, 2004). Although such agencies differ from country to country, research reveals that five characteristics prevail: (1) coordination by a specialized, legally constituted national entity; (2) emphasis on institutional self-evaluation; (3) external evaluation by academic peers, conducted subsequently to the self-evaluation process; (4) publication of the evaluation’s results; and (5) little or no relationship between the evaluation findings and the allocation of public resources (van Vught; Westerheijden, 1993).

In Brazil, there are two national Higher Education evaluation systems, one mandated by law since 2004 that focuses on institutions (both federal and private) and undergraduate programs, and the other, which began in 1980, that deals exclusively with the quality of graduate study. The graduate-level model, as described by Verhine (2008), deviates from the commonly found characteristics described above in many respects. On the other hand, the institutional/undergraduate approach, known as Sistema Nacional de Avaliação da Educação Superior (SINAES, National System for the Evaluation of Higher Education), adopts most international tendencies but with important specificities. This article discusses the SINAES model in terms of key lessons emerging from its 17 years of existence. After briefly reviewing the model’s structure and organization, the article focuses on the adjustments made in the original framework to ensure its full implementation. It then presents the challenges that remain for continuous improvement of the SINAES approach. The text pays special attention to the consolidation of institutional self-evaluation. It concludes by addressing other problems that must be overcome in the near future to enable SINAES to effectively assure and promote the quality of Higher Education in Brazil.

The SINAES Model

Brazil’s SINAES was installed by a national law in 2004 (Law nº 10.861, April 14, 2004), with the primary objective of improving Higher Education’s institutional and academic quality and social contribution. The SINAES model was built upon prior national experiences, including an effort to promote institutional self-evaluation, a national test of concluding students’ learning achievement, and visits by peer commissions to judge on site the adequacy of graduate programs in terms of human and physical infrastructure.

However, SINAES went beyond previous initiatives by seeking to link both formative and summative evaluation with government regulation in an integrated manner. The System is structured around three components, dealing with institutional quality, program quality, and student achievement, respectively. Evaluating the learning achievements of concluding undergraduate students is, in international perspective, its most original component: it involves the annual application of a national examination that, over a three-year cycle, covers more than 60 professional fields and is a mandatory requirement for student graduation. It is also very controversial and the subject of much of the academic literature on SINAES (Verhine; Dantas, 2009; Verhine; Dantas; Soares, 2006).

It is often forgotten that the original conception of the SINAES model focuses on processes of self-evaluation. According to the official documents, those processes are designed to promote, at the institutional level, participatory involvement in global analyses that consider the structures, activities, relationships, and social responsibilities associated with Higher Education quality.

As established by national laws, SINAES is operationalized by the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (INEP, Anísio Teixeira National Institute for Educational Studies and Research) and coordinated by the Comissão Nacional de Avaliação da Educação Superior (CONAES, National Commission for the Evaluation of Higher Education), a body of thirteen members that formulates directives to promote the evaluation’s theory and practice in an articulated manner. Both the external and internal components of the SINAES evaluation address 10 institutional dimensions, grouped into the categories of (1) planning and evaluation, (2) academic quality, (3) administrative quality, and (4) physical infrastructure.

Initiatives Adopted to Implement SINAES

Although carefully conceived and theoretically grounded, the SINAES model proved complicated to implement. The evaluation of student achievement through a national test was operationalized immediately, in large part because it built upon a national test structure created in 1995. Institutional self-evaluation was also initiated quickly, as evidenced by the fact that by 2006, most Higher Education institutions in the country had installed an evaluation commission and had sent their required evaluation reports to the Ministry of Education (Brasil, 2011).

However, the processes of external evaluation focusing both on institutions and undergraduate programs were implemented more slowly and with great difficulty. External evaluators had to be recruited and trained, evaluation instruments had to be formulated, tested, and reformulated, and a logistical infrastructure had to be developed to enable on-site visits to all Higher Education institutions and undergraduate programs that comprised Brazil’s Federal System of Education (Brasil, 2004; Verhine; Dantas, 2018). Meanwhile, because institutions did not immediately receive either feedback from the Ministry of Education regarding their evaluation reports or the expected on-site visits by peer commissions, many became disenchanted with the overall process, and, as a result, early enthusiasm surrounding the SINAES initiative waned. Since the student exam was the only component of the System that occurred regularly, it, rather than self-evaluation, quickly became the model’s central ingredient, distorting the System’s conceptual framework.

Finally, in 2008, CONAES and INEP adopted a series of operational measures to make the model’s full implementation viable. While it is not possible to discuss all the relevant measures here, three of them deserve special attention given the magnitude of their contribution to the consolidation of SINAES on a national level. Each is briefly addressed below.

Conceito Preliminar de Curso (CPC)

The original design of the National System for the Evaluation of Higher Education was predicated on the assumption that all undergraduate programs in the Federal System, currently more than 30,000, would be visited by an external evaluation commission. The visits were expected to occur at three-year intervals, following the cycle of the Exame Nacional de Desempenho dos Estudantes (ENADE, National Exam for the Assessment of Student Performance). The intention was laudable, but its materialization was not possible due to budgetary and logistical limitations. This impossibility paralyzed INEP’s work, so that by 2008, four years after SINAES was implemented, the programs visited were limited to just a few in the field of Veterinary Medicine, selected to pre-test the newly developed evaluation instruments.

To solve this problem, several alternative strategies were considered. The chosen one restricted the required visits to the most deficient programs in any given field. An index denominated the Conceito Preliminar de Curso (CPC, Preliminary Course Grade) was created to identify such programs. The index was composed of quantitative indicators designed to estimate, in a preliminary fashion, the program’s quality. The indicators related to three dimensions of program quality – faculty, teaching process, and physical infrastructure – and they were selected and weighted in accordance with best-fit mathematical equations. In recent years, these indicators and their respective weights have been continually refined as new data sources become increasingly available.

The CPC uses a five-point scale and is applied so that all programs with an unsatisfactory grade (scale levels 1 and 2) are visited by an evaluation commission, with their final grades determined by the on-site visitors. Programs with a satisfactory score (levels 3, 4, and 5) may request a visit if they wish. If they do not do so, the CPC grade is reported as the final grade. Since the CPC is based on the normal curve, only about 25% of all programs are deemed unsatisfactory, which means that the total number of required commission visits is reduced to only one-fourth of the programs in the Federal System of Education. Very few programs graded as satisfactory by the CPC index do request a visit, given that the external commission may not only raise the final grade but also lower it.
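To make the decision rule concrete, the sketch below (in Python) maps hypothetical standardized CPC scores to the five bands and flags the programs that would require a mandatory visit. The band cutoffs, program names, and scores are illustrative assumptions, not INEP's official parameters.

# Minimal sketch of the CPC-based visit rule; cutoffs are assumed, not official.
def cpc_band(z_score, cutoffs=(-1.5, -0.5, 0.5, 1.5)):
    """Map a standardized CPC score to a band from 1 to 5."""
    band = 1
    for cut in cutoffs:
        if z_score >= cut:
            band += 1
    return band

def requires_visit(band):
    """Bands 1 and 2 (unsatisfactory) trigger a mandatory on-site evaluation."""
    return band <= 2

programs = {"Program A": -1.8, "Program B": 0.2, "Program C": 1.7}  # hypothetical scores
for name, score in programs.items():
    band = cpc_band(score)
    status = "visit required" if requires_visit(band) else "visit optional"
    print(f"{name}: CPC band {band} ({status})")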

Índice Geral de Cursos Avaliados da Instituição (IGC)

The Índice Geral de Cursos Avaliados da Instituição (IGC, General Institutional Course Index) was also introduced in 2008. It represents the mean of the evaluation grades obtained by all the undergraduate and graduate programs at any given Higher Education institution. The average is weighted by the number of students in each program and normalized in accordance with a five-point scale. To understand the importance of the IGC, one needs to understand the limitations inherent to on-site external evaluations, especially when the resulting judgement is used for purposes of governmental regulation. In Brazil, Higher Education regulatory measures pertaining to institutions judged unsatisfactory by the external evaluation commissions include the signing of an agreement with the Ministry of Education to make specified improvements within an established period. Failure to meet the terms of the agreement can result in punishments such as the suspension of the right to enroll new students, the mandatory replacement of the university’s dean, and the loss of institutional accreditation. Thus, the consequences of the external evaluation process are great, impacting the institution’s very survival. As a result, the IGC was created as a reference for the external evaluators, since the mean of all program grades can be viewed as an indicator (albeit partial) of institutional quality.
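In simplified form, the enrollment-weighted average underlying the IGC can be written as follows (a sketch of the idea described above, not INEP's exact formula, which involves additional normalization steps):

\[
\mathrm{IGC} \;=\; \frac{\sum_{i=1}^{k} n_i \, G_i}{\sum_{i=1}^{k} n_i}
\]

where \(G_i\) is the evaluation grade of program \(i\), \(n_i\) is the number of students enrolled in it, \(k\) is the number of evaluated programs at the institution, and the resulting value is then converted to the five-point SINAES scale.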

When the grade attributed by the commission is different from that suggested by the IGC, a red flag is raised, and the commission’s evaluation report is sent to a committee of specialists for in-depth review. The committee may accept the report, ask the visiting commission for further justification, or annul the visit and require another one in its place.

The IGC, therefore, serves to substantiate the external evaluation process, reducing the likelihood of grade distortion due to the subjective bias of evaluators who, in many instances, are relatively inexperienced and find it difficult to evaluate institutional quality in a comparative perspective. Contrary to its original intention, in recent years the IGC has been used for regulatory purposes and is also one of the indicators the Ministry of Education employs to determine the amount of funding channeled to federal institutions.

INEP announced in 2016 that the IGC would be replaced by a new set of indicators. In 2018, CONAES recommended to INEP that the CPC grade be reported by dimensions rather than as a single, overall grade. However, as of 2021, neither the new indicators nor the reporting of results by dimensions has been implemented. The relationship between the SINAES evaluation and the governmental regulation of Higher Education is addressed in the final section of this article.

Exame Nacional do Ensino Médio (ENEM) vs. Exame Nacional de Desempenho dos Estudantes (ENADE)

The third important adjustment concerns the replacement, in 2011, of the ENADE exam originally applied to first-year undergraduate students with the Exame Nacional do Ensino Médio (ENEM, National High School Exam).

To understand the significance of this decision, we must discuss a prior adjustment: the creation of the Indicador de Diferença entre os Desempenhos Observado e Esperado (IDD, Difference between Expected and Observed Results). Compared to the national exams applied before SINAES, a major advance of the ENADE approach was that, in addition to testing graduating students, it also used the same tests to examine students in their first year of college study³. By applying both first-year and last-year tests, an effort was made to measure the institutional contribution to student learning, reducing the likelihood that differences in inter-institutional achievement would be attributed exclusively to external factors, such as family and prior schooling background. At first, the comparisons between the two test results were very crude, derived from subtracting the findings for first-year students from those for last-year students. It was quickly understood that such comparisons were inappropriate because they were predicated on the dubious assumption that the two cohorts were essentially the same, something unlikely in the context of rapidly expanding college enrollments and low rates of student completion.

Thus, the IDD was introduced in order to better capture the “value added” of Higher Education study. Calculated via multiple regression equations, the indicator compares final-year observed outcomes with those predicted when considering entry learning scores, parental education, and institutional selectivity. In this respect, using ENEM instead of ENADE to measure entry level learning makes sense, since the before-after tests do not have to be identical for one to predict the other.
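As an illustration of the residual-based “value added” logic behind the IDD, the following Python sketch fits a regression that predicts graduating students' scores from entry-level characteristics and treats the gap between observed and predicted scores, averaged by program, as the indicator. All variable names and data are invented for the example; the official IDD computation is more elaborate.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500  # hypothetical graduating students

# Invented entry-level predictors: ENEM score, parental education (years), selectivity index.
X = np.column_stack([
    rng.normal(500, 80, n),
    rng.integers(4, 18, n),
    rng.normal(0, 1, n),
])
enade_final = 40 + 0.08 * X[:, 0] + 0.5 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 8, n)
program_id = rng.integers(0, 10, n)  # program to which each student belongs

# Predict the expected final score from entry characteristics.
model = LinearRegression().fit(X, enade_final)
value_added = enade_final - model.predict(X)  # observed minus expected

# A crude IDD-like indicator: the mean residual per program.
for p in range(10):
    print(f"program {p}: mean value added = {value_added[program_id == p].mean():.2f}")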

The reasons for using ENEM in place of ENADE (first-year students) to estimate the results of ENADE (last-year students) are the following. First, the substitution reduces institutional contamination, since first-year ENADE is applied to students at the end of their freshman year, whereas ENEM focuses on high school graduates and is typically taken before college entry. Second, the decision reduces the number of exams that college students are required to take; when first-year ENADE was required, most students took both exams, as ENEM is used by most Higher Education institutions for selecting new enrollees. Third, since ENEM is linked to a national student databank via social security number, it facilitates longitudinal analyses and thereby improves value-added estimates. Finally, ENEM is conceptually and technically superior to ENADE, since the former is composed of 180 competency- and knowledge-based items whereas the latter is made up of only 40 questions (Zoghbi; Oliva; Moricon, 2009).

The adjustments involving the CPC, IGC, IDD, ENADE, and ENEM have been referred to as the alphabet of SINAES (Polidori, 2009). Taken together, they attest to the dynamic nature of the original model and have helped make it a viable instrument for assuring and promoting the quality of Higher Education. However, a dimension of SINAES remains highly problematical. This aspect is discussed in the following section.

The Challenge of Institutional Self-Evaluation

Although the three components of the SINAES model have been effectively implemented, improvements are still necessary. Of the challenges that must be faced, the most crucial and problematic pertains to institutional self-evaluation, which, as noted above, is the central conceptual element of the overall evaluation process. In some institutions, self-evaluation is well organized and structured, involving a significant part of the academic community and producing reports that institutional leaders use to adopt policies for improving academic quality. It is evident, however, that the self-evaluation successes are outnumbered by cases in which self-evaluation processes are either non-existent or extremely fragile. A study by the Ministry of Education revealed that 60% of Higher Education institutions were late in submitting the required annual self-evaluation report. Another study analyzed 172 institutional reports and concluded that, five years after SINAES was implemented: (1) participation on the part of the academic community tends to be very limited; (2) there is little consistency between the evaluation results and the institution’s context; (3) most of the reports in the sample were devoid of in-depth analysis and interpretation; and (4) only 13.4% of the reports could be judged as complete and of satisfactory quality (Brasil, 2011).

These findings are worrisome if one considers the centrality of self-evaluation processes within SINAES’ conceptual model. Self-evaluation assures that SINAES has a formative component. Generating an internal dynamic to promote educational quality, creating an evaluation culture within the institution, permitting accountability with respect to the community that the institution serves, and providing data to support decisions on institutional governance and planning are objectives of that component. In addition, within the context of SINAES, self-evaluation necessarily precedes the external evaluation visits since it offers the background and context that commissions require to complete their task.

To deal with the overall fragility of the SINAES self-evaluation processes, the coordinating bodies of SINAES (CONAES and INEP) developed instruments and promoted regional seminars to guide institutional commissions about best-practice techniques, procedures, report structure, and problem-solving. This top-down approach has proven helpful, but it is not ideal, especially since self-evaluation should not be externally imposed. On the contrary, it should be essentially bottom-up, grounded in the peculiar history, culture, and mission of the institution.

Even so, more must be done from a centralized standpoint to guarantee that effective institutional self-evaluation occurs. Local commissions need to be provided with the incentives, support, and infrastructure necessary to effectively undertake their work. These commissions must also be given autonomy with respect to the interests of institutional authorities, and they should be part of a more general university evaluation structure that establishes directives, objectives, and procedures while leaving operational aspects to qualified technical personnel. Also recommended is the implementation of self-evaluation at the program level, since it is at this level that most student learning activities take place. Thus, it would be advisable to create sub-commissions, working under the institution-wide commission, to deal with micro-units, whether they are institutes, programs, or courses.

Meanwhile, CONAES and INEP must work together to promote a regular cycle of seminars and publications about self-evaluation. Their guidelines should be pedagogical in nature and require that the annual reports be accompanied by an action plan to resolve identified problems and weaknesses. Another measure worth considering would be to create committees of specialists at the national level to read, critique, and provide feedback regarding samples of annual self-evaluation reports. At the same time, CONAES and INEP must continue to ensure that all relevant information, such as that from the National Higher Education Census, the ENADE test and questionnaire, and on-site visits, is readily made available to the institutions of higher learning that comprise the Federal System of Education.

The outlined feedback should give special attention to Topic 1 of the instrument used by the external evaluators, which deals specifically with the external evaluation of the internal, self-study process, considering its evolution over time, how participatory it is in nature, its depth of analysis, the degree of its transparency, and its overall impact on institutional decision-making and quality improvement. The external evaluation of self-evaluation in the context of Higher Education is an approach utilized in many countries. In Great Britain, for example, visiting evaluation commissions conduct an Institutional Audit, whereby the structure and mechanisms adopted to assure institutional quality are given primary attention. The “auditors” judge quality governance and the integrity of the accountability-based findings that are reported. The British approach is based on the premise that an institution that makes a systematized effort to assure its own quality is an institution that deserves to be positively evaluated from an external point of view (Alderman; Brown, 2005). This perspective would seemingly be appropriate for Brazil, where the number, variety, and geographic dispersion of Higher Education institutions make reliance on external evaluation processes increasingly problematic.

Other Challenges of Special Concern

In addition to the creation of structures for making institutional self-evaluation more effective, other challenges must be faced for SINAES to achieve its full potential as a nationwide system for Higher Education quality assurance. Thus, a brief discussion of some of the other challenges that must be addressed in the near future within the context of SINAES is warranted.

The globality of SINAES

The National System for the Evaluation of Higher Education is not completely national, for it does not involve state and municipal institutions. Under Brazil’s federal framework, state universities and colleges are evaluated and regulated by State Boards of Education. They can take part in SINAES if their state officially agrees to do so, but no such formal agreements presently exist. All state institutions voluntarily participate in ENADE, but none accepts the evaluation commissions organized by INEP, in part because states prefer to use commissions composed of local academics who, from their point of view, understand the context in which the institution functions. The involvement of state institutions in SINAES should be strongly encouraged as a means for ensuring that minimum quality standards are met by all Higher Education institutions in all parts of the country. It is important, in this respect, that state boards understand that participation in SINAES does not mean forsaking their regulatory prerogatives. After all, SINAES is a system for evaluation, not for regulation, and the data that it generates can be used for a variety of purposes, including state-based decisions regarding the accreditation of state colleges and universities.

The quality of indicators

The quality of the indicators used by SINAES (IGC, CPC, IDD, and others) must continue to improve so that the information they provide is increasingly reliable and complete. New data sources should be utilized, additional variables should be included in the equations, alternative variable weighting should be tested, more sophisticated statistics and analyses should be utilized, and validation processes should be more rigorous. In the case of the CPC, for example, efforts should be made to consider alternative forms of measuring pedagogical processes and physical infrastructure, to adopt criterion-referenced procedures instead of the normal curve, and to organize visits to programs on all levels of the scale (rather than just to those at the bottom) to establish comparability between CPC results and those produced by on-site evaluators.

Also, other dimensions should be investigated, such as the degree of institutional internationalization and the labor market trajectory of graduates, using data from the National Census Bureau, the Ministry of Labor, and other, yet untapped, data sources. The possibilities for indicator improvement are immense, but advances in this respect require that those involved in the management of SINAES recognize that the model is dynamic, incomplete, and part of a building process that should never be allowed to stagnate.

The quality of evaluators

One of the main challenges that SINAES faces concerns the evaluators who make up the visiting commissions. The problem is made especially acute by the regulatory impact of on-site evaluations, since the grade given by the visiting commission is used to accredit or penalize institutions in the Federal System. INEP has improved its evaluator-training programs, using both face-to-face and distance learning approaches to prepare new evaluators and update the training of those already working in the system. It has also introduced processes whereby those who are evaluated can assess the evaluators’ technique, thus providing the commission members with valuable feedback and enabling INEP to identify problematic evaluators who need either to receive additional training or to be removed from the evaluator database. INEP would be wise to develop a tool within its evaluator database that highlights experienced evaluators who have completed their assignments successfully and identifies those with limited SINAES experience but strong potential. This would enable the creation of commissions whose members have different levels of experience, thereby ensuring that visiting commissions are led by senior evaluators and that junior evaluators can learn from their more experienced peers. This approach would allow successful evaluators to receive deserved recognition and those new to the field to become increasingly qualified over time.

Result utilization

The broad international literature on evaluation indicates that a universal problem confronting large-scale evaluations concerns the utilization of the evaluation results (or lack thereof). SINAES is no exception. Often, the resulting reports are never read by those for whom they are intended. Other times, reports are read, but the information is not used for decision making or policy formation, and, in some instances, it is applied inappropriately, in a negative, punitive fashion. The good use of results depends on several factors, such as the existence of an evaluation culture, the development of appropriate incentives, the provision of proper orientation, the pedagogical communication of results, and the use of effective monitoring. As mentioned, the self-evaluation reports should be linked to concrete plans for remedial action designed to resolve identified problems. Also, the reports regarding ENADE performance, sent to each participating program, should clearly indicate the relationship between the observed achievement and the competencies, abilities, and knowledge that are established in the test specification matrix. In addition, the SINAES website needs to be updated and expanded, so that valuable experiences, information, and analyses are made available to those in the wider SINAES community.

Evaluation vs. regulation

In the context of SINAES, articulation between Higher Education evaluation and regulation is necessary since, by law, the results of the evaluation effort must be used by government authorities to make regulatory decisions regarding, for example, institutional accreditation and program authorization. However, evaluation and regulation are distinct, demanding differential procedures, competencies, and perspectives (Sguissardi, 2008; Verhine, 2015; Weber, 2010).

In Brazil, regulation is based on legally binding governmental determinations designed to guarantee that society members receive goods and services of satisfactory quality. Evaluation, in its turn, seeks to provide objective and reliable information to undergird decision-making, regarding not only regulation but also many other types of decisions by a diverse array of actors, which, in many instances, are distant from the governmental realm. Thus, it is imperative not to confuse one process with the other to avoid unfavorable distortions. The evaluation must be allowed to proceed free of the pressures and vested interests that surround governmental regulation so as to protect the evaluation’s integrity and preserve its value in assuring and promoting the quality of Higher Education in the country.

Implementation of a multidimensional evaluation model

The cited challenges involve making relatively minor alterations in the SINAES framework. Before closing the article, it is useful to discuss the possibility of a much more significant change: the implementation of a multidimensional evaluation model.

As previously mentioned, the 2004 SINAES Law indicates that institutional evaluations should focus on ten distinct dimensions. Thus, as conceptualized, the SINAES model is multidimensional in terms of its internal structure. The results, however, are reported in a unitary fashion, following a five-point scale.

A growing body of literature contends that, instead of relying on a unitary score, evaluations in education should report specific scores for each of the evaluated dimensions (Bae, 2018; Goe, 2010; Rothman, 2015). In this model, a so-called “data dashboard” is provided, whereby users can identify both the strengths and the weaknesses of a given institution or program. Also, since the dimensions are not weighted a priori, users are free to value each dimension according to their own perspectives.

Since 2012, a multi-dimensional model for Higher Education, created under the auspices of the European Commission, has been utilized by many institutions in various parts of the world. The model, known as the U-Multirank, addresses five dimensions – Teaching and Learning, Research, Knowledge Transfer, International Orientation, and Regional Engagement – via 35 indicators and presents its results by dimension and by dimension indicator, using a colorful graphic display (Vught; Ziegele, 2012). Unlike SINAES, institutional and program participation is voluntary, and no country uses the U-Multirank framework for regulatory purposes. Using multiple results for decision-making, such as those pertaining to regulation and/or financing, is complicated (Brasil, 2019).

The SINAES Law expressly permits that the evaluation findings be presented by each dimension rather than by the combination of the dimensions. Until now, only the combination approach, with a single grade for each institution and program, has been utilized. However, adopting the multidimensional model, along with making the other changes and improvements suggested in this article, deserves serious consideration as SINAES approaches its third decade of existence.

³ In 2020, ENADE was cancelled due to the pandemic; it was reinstated in 2021.

Como citar este artigo/How to cite this article

Verhine, R. E.; Dantas, L. M. V. Brazil’s National System for the evaluation of Higher Education: context, challenges, and perspectives. Revista de Educação PUC-Campinas, v. 26, e215312, 2021. https://doi.org/10.24220/2318-0870v26e2021a5312

References

Alderman, G.; Brown, R. Can quality assurance survive the market: accreditation and audit at the crossroads. Higher Education Quarterly, v. 59, n. 4, p. 313-328, 2005.

Bae, S. Redesigning systems of school accountability: a multiple measures approach to accountability and support. Education Policy Analysis Archives, v. 26, n. 8, p. 1-28, 2018.

Billing, D. International comparisons and trends in external quality assurance of higher education. Higher Education, v. 47, p. 113-137, 2004.

Brasil. Lei nº 10.861, de 14 de abril de 2004. Institui o Sistema Nacional de Avaliação da Educação Superior – SINAES e dá outras providências. Diário Oficial da União, Brasília, 2004.

Brasil. Ministério da Educação. Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. Avaliação multidimensional de programas de pós-graduação: Relatório Técnico DAV. Brasília: CAPES, 2019.

Brasil. Ministério da Educação. Da concepção à regulamentação. 2. ed. Brasília: INEP/MEC, 2004.

Brasil. Ministério da Educação. Sistema Nacional de Avaliação da Educação Superior: análise dos relatórios de autoavaliação das instituições de educação superior. v. 3. Brasília: INEP/MEC, 2011.

Brennan, J. Authority, legitimacy and change: the rise of quality assurance in higher education. Higher Education Management, v. 9, n. 1, p. 7-30, 1997.

Brennan, J.; Shah, T. Managing quality in higher education: an international perspective on institutional assessment and change. Buckingham: OECD, 2001.

Dias Sobrinho, J. Avaliação: políticas educacionais e reformas da educação superior. São Paulo: Cortez, 2003.

European Commission. Initiative of quality assurance and assessment of higher education in Europe. Luxembourg: Office of the Official Publications of the European Commission, 1995.

Goe, L. Evaluating teaching with multiple measures. Washington: American Federation of Teachers, 2010.

Kerr, C. The uses of the university. Cambridge: Harvard University Press, 1982.

King, R. K. Governance and accountability in the regulatory state. Higher Education, v. 53, p. 411-430, 2007.

Lim, D. Quality assurance in Higher Education: a study in developing countries. Oxfordshire: Routledge, 2017.

Polidori, M. M. Políticas de avaliação da educação superior brasileira: Provão, SINAES, IDD, CPC, IGC e outros índices. Avaliação: Revista da Avaliação da Educação Superior, v. 14, n. 2, p. 267-290, jul. 2009.

Rhoades, G.; Sporn, B. Quality assurance in Europe and the U.S.: professional and political economic framing of higher education policy. Higher Education, v. 43, p. 355-390, 2002.

Rothman, R. Data dashboards: accounting for what matters. Washington: Alliance for Excellent Education, 2015.

Sguissardi, V. Regulação estatal versus cultura de avaliação institucional? Revista da Avaliação da Educação Superior, v. 13, n. 3, p. 857-862, 2008.

Thune, C. The European systems of quality assurance: dimensions, harmonization and differentiation. Higher Education Management, v. 10, n. 3, p. 9-26, 1998.

van Vught, F. A.; Westerheijden, D. F. Quality management and quality assurance in European higher education: methods and mechanisms. Luxembourg: Office of the Official Publications of the European Commission, 1993.

Verhine, R. E. Avaliação da CAPES: subsídios para a reformulação do modelo. In: Machado, D.; Silva Junior, J. R.; Oliveira, J. F. (org.). Reformas e políticas: educação superior e pós-graduação no Brasil. Campinas: Alínea, 2008. p. 165-188.

Verhine, R. E. Avaliação e regulação da educação superior: uma análise a partir dos primeiros 10 anos do SINAES. Revista da Avaliação da Educação Superior, v. 20, n. 3, p. 603-619, 2015.

Verhine, R. E.; Dantas, L. V. A avaliação do desempenho de alunos de educação superior: uma análise a partir da experiência do ENADE. In: Lordêlo, J. A.; Dazzani, M. V. Avaliação educacional: desatando e reatando nós. Salvador: Universidade Federal da Bahia, 2009. p. 347-405.

Verhine, R. E.; Dantas, L. V. Brazil: problematics of the tripartite federal framework. In: Carnoy, M. et al. Higher education in federal countries: a comparative study. New York: Sage, 2018. p. 212-257.

Verhine, R. E.; Dantas, L. V.; Soares, J. F. Do Provão ao ENADE: uma análise comparativa dos exames nacionais utilizados no ensino superior brasileiro. Avaliação e Políticas Públicas em Educação, v. 14, n. 52, p. 291-310, 2006.

Verhine, R. E.; Freitas, A. M. A Avaliação da educação superior: modalidades e tendências no cenário internacional. Ensino Superior Unicamp, v. 3, p. 16-39, 2012.

Vught, F. A.; Ziegele, F. (ed.). Multidimensional Rankings: the design and development of U-Multirank. Dordrecht: Springer, 2012.

Weber, S. Avaliação e regulação da educação superior: conquistas e impasses. Educação & Sociedade, v. 31, n. 113, p. 1247-1269, 2010.

Zoghbi, A. C. P.; Oliva, B. T.; Moricon, M. Aumentando a eficácia e a eficiência da avaliação do Ensino Superior: uma análise do uso do ENEM como alternativa ao ENADE para ingressantes. In: V Reunião da ABAVE, 2009, Salvador. Anais [...]. Salvador: Associação Brasileira de Avaliação Educacional, 2009. p. 1-18.

Received: March 19, 2021; Revised: August 24, 2021; Accepted: September 03, 2021

Correspondência para/Correspondence to: R.E. VERHINE. E-mail: rverhine@gmail.com.

Contributors

R. E. VERHINE was responsible for the study conception and design, analyses and interpretation of the data, revision and approval of the final version of the article. L. M. V. DANTAS was responsible for the analyses and interpretation of the data, revision and approval of the final version of the article.

Creative Commons License This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.