
Revista Brasileira de Educação Médica

Print version ISSN 0100-5502 / Online version ISSN 1981-5271

Rev. Bras. Educ. Med. vol. 46 no. 3, Rio de Janeiro, 2022. Epub Aug 04, 2022

https://doi.org/10.1590/1981-5271v46.3-20220081 

ORIGINAL ARTICLE

Brazilian Version of the ACE (Assessing Competencies in Evidence-Based Medicine) Tool: a Validation Study

Ferdinand Gilbert Saraiva da Silva Maia1 
http://orcid.org/0000-0002-4225-0521

Ana Karenina Carvalho de Souza1 
http://orcid.org/0000-0001-6356-7328

Breno Carvalho Cirne de Simas1 
http://orcid.org/0000-0001-8683-6631

Isadora Soares Lopes1 
http://orcid.org/0000-0003-1097-3386

Maria Paula Ribeiro Dantas Bezerra1 
http://orcid.org/0000-0002-2664-5742

Rosiane Viana Zuza Diniz1 
http://orcid.org/0000-0002-3883-2780

1Universidade Federal do Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil.


Abstract:

Introduction:

The ACE (Assessing Competencies in Evidence-Based Medicine) Tool is a recently developed questionnaire to assess competencies in Evidence-Based Medicine. The aim of this study is to validate the Brazilian version of the ACE Tool.

Methods:

This is a cross-sectional validation study carried out in two phases. In the first phase, the questionnaire was translated. In the second phase, the questionnaire was applied to undergraduate students and teachers/preceptors of the medical course. The evaluated properties were validity, internal consistency and reliability.

Results:

Seventy-six undergraduate medical students and 12 teachers/preceptors were included. The mean score of teachers/preceptors was significantly higher than that of students (10.25±1.71 vs 8.73±1.80, mean difference of 1.52, 95%CI 0.47-2.57, p=0.005), demonstrating construct validity. The Brazilian version of the ACE Tool obtained adequate internal consistency (Cronbach’s alpha = 0.61) and reliability (item-total correlation ≥ 0.15 in 14 of the 15 items).

Conclusion:

The Brazilian version of the ACE Tool shows acceptable psychometric properties and can be used as an instrument to assess competencies for Evidence-Based Medicine in Brazilian medical students.

Key words: Medical Education; Validation Study; Evidence-Based Medicine


INTRODUCTION

In 1991, in an editorial in the ACP Journal Club, Gordon Guyatt used the term “Evidence-Based Medicine” (EBM) for the first time in the medical literature to describe a new way of thinking about and practicing medicine, one that prioritizes skills in searching the literature, critically appraising scientific articles and synthesizing information for individualized clinical decision-making, rather than appeals to the authority of more experienced professionals and textbooks1.

David Sackett, one of the pioneers of clinical epidemiology, defined EBM as “the conscious, explicit and judicious use of the best evidence for decision-making in the care of individual patients”. The practice of EBM therefore combines the best scientific evidence, the experience and expertise of the professional, and the particularities of the patient, including their values and preferences, to reach the best decision2.

The practice of EBM, and therefore its teaching and assessment, must comprise five steps (or domains), as summarized by the Sicily Statement3: ask, search, appraise, integrate and evaluate, as shown in Chart 1.

Chart 1 Steps for evidence-based practice

Ask: Understand the clinical setting and develop a structured question that can be answered.
Search: Build an appropriate search syntax, with descriptors and Boolean operators, and identify the appropriate databases.
Appraise: Critically assess the methodology and results of an article regarding its internal and external validity.
Integrate: Integrate the results of critically evaluated research into the care of a specific patient.
Evaluate: Evaluate changes in one’s current medical practice and identify opportunities for improvement.

Source: adapted from Dawes et al.3.

The ACE (Assessing Competencies in Evidence-Based Medicine) Tool is a questionnaire to assess competencies for EBM, proposed and validated by Ilic et al.4. Respondents are presented with a clinical scenario, a clinical question, a search strategy, and a hypothetical article summary. Then, 15 closed questions are presented, each to be answered with “yes” or “no”, covering four of the five steps for evidence-based practice: the construction of the clinical question (questions 1 and 2); the search of the scientific literature in databases (questions 3 and 4); the critical analysis of the evidence found (questions 5 to 11); and the application of the evidence to the specific clinical setting (questions 12 to 15)4.
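For illustration only, the sketch below shows how individual yes/no responses could be scored against this item-to-step mapping. The mapping follows the description above; the answer key and all names are hypothetical, since the official key is distributed with the instrument itself.

```python
# Minimal scoring sketch for the ACE Tool's 15 yes/no items.
# The step-to-item mapping follows the paper; the answer key passed in
# as `key` is HYPOTHETICAL -- the real key ships with the instrument.

DOMAINS = {
    "ask": [1, 2],
    "search": [3, 4],
    "appraise": [5, 6, 7, 8, 9, 10, 11],
    "integrate": [12, 13, 14, 15],
}

def score(responses, key):
    """Count correct answers per EBM step and overall (0-15).

    responses, key: dicts mapping item number (1-15) to True (yes) / False (no).
    Missing responses count as incorrect.
    """
    per_step = {
        step: sum(responses.get(i) == key[i] for i in items)
        for step, items in DOMAINS.items()
    }
    total = sum(per_step.values())
    return per_step, total
```

For example, `score({1: True, 2: False, ...}, key)` would return the counts per step together with the 0-15 total used throughout the analyses below.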

The aim of this study is to validate the Brazilian version of the ACE Tool.

METHODS

Design, participants and ethics

This is a cross-sectional validation study. Medical students from Universidade Federal do Rio Grande do Norte (UFRN) enrolled in a complementary or extension course on Evidence-Based Medicine were invited to answer the questionnaire after the first class. Teachers and preceptors of the medical course, recognized by the researchers as familiar with the topic, were also invited, aiming to assess the discriminatory capacity of the questionnaire. The research protocol was reviewed and approved by the Research Ethics Committee of Hospital Universitário Onofre Lopes (Huol - UFRN) with CAAE n. 30445120.0.0000.5292 and Opinion n. 4,074,739.

Translation and adaptation of the assessment questionnaire

The initial translation of the questionnaire was carried out independently by two researchers with experience in the subject and fluency in English, after which a single version was established. This consensus version in Portuguese was back-translated into English by a professional translator, who did not participate in the previous phases. The back-translated version was then compared to the original version of the questionnaire in English and new adjustments were made, until a final version was attained by consensus between the researchers and the translator. The final questionnaire items are shown in Chart 2, while the full translated and adapted version of the ACE Tool in Portuguese is shown in the supplementary material.

Chart 2 Questionnaire Items - translated version of the ACE Tool 

Fazendo uma pergunta passível de resposta Sim Não
1. Todos os elementos PICO estão descritos no cenário do paciente?
2. A questão construída após o cenário produz uma pergunta objetiva e direcionada?
Buscando na literatura
3. A estratégia de busca (a ser utilizada no Medline) encontrará estudos relevantes relacionados à pergunta?
4. A estratégia de pesquisa utiliza descritores em saúde (MeSH/DeCS), palavras-chave e operadores booleanos de forma correta e efetiva?
Avaliando a evidência
5. Há informações suficientes para determinar a representatividade dos pacientes do estudo?
6. O método de alocação dos participantes para a intervenção/exposição e a comparação foi adequado?
7. Alguma forma de ajuste foi necessária?
8. Todos os participantes estavam cegos para o tratamento/exposição?
9. Todos os pesquisadores estavam cegos para o tratamento/exposição?
10. Todos os avaliadores dos desfechos estavam cegos para o tratamento/exposição?
11. Todos os pacientes foram analisados nos grupos para os quais foram randomizados?
Aplicando a evidência
12. O paciente do cenário compartilha características/circunstâncias semelhantes às dos participantes no estudo?
13. O tratamento/terapia é factível no contexto do cenário clínico proposto?
14. Todos os desfechos relevantes foram considerados?
15. Os benefícios do tratamento/terapia superam os potenciais danos e custos?

Source: translated and adapted from Ilic et al.4.

Application of the Questionnaire

The questionnaire was applied through an online platform, to be answered in a single attempt with no time limit. All participants provided free and informed consent.

Statistical analysis

The following variables were collected: group (students or teachers/preceptors); semester attended by the student; responses to each item of the ACE questionnaire; and total number of correct answers. A sample size of 75 students was estimated (5 participants per item of the questionnaire). The difficulty of the questionnaire items, internal consistency and reliability were evaluated. The difficulty of each item was evaluated as the percentage of respondents who answered the question correctly. The internal consistency of the questionnaire was assessed using Cronbach’s alpha: values between 0.6 and 0.7 were considered acceptable; between 0.7 and 0.9, good; and above 0.9, excellent. Reliability was assessed by the item-total correlation (ITC); an ITC ≥ 0.15 was considered acceptable5. The students’ results were compared with the teachers/preceptors’ results using Student’s t test for independent samples. P values <0.05 were considered statistically significant.
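As an illustration of these calculations, a minimal sketch in Python, assuming the responses are coded as a binary matrix (rows = respondents, columns = the 15 items, 1 = correct). The paper does not state whether the corrected or uncorrected item-total correlation was used, so the corrected form is shown as one reasonable choice; all names are illustrative.

```python
import numpy as np

def item_analysis(X: np.ndarray):
    """X: binary response matrix, rows = respondents, columns = items (1 = correct)."""
    n, k = X.shape
    # Difficulty index: percentage of respondents who answered each item correctly.
    difficulty = 100 * X.mean(axis=0)
    # Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals).
    totals = X.sum(axis=1)
    alpha = (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum() / totals.var(ddof=1))
    # Corrected item-total correlation: each item vs. the sum of the remaining items.
    itc = np.array([
        np.corrcoef(X[:, j], totals - X[:, j])[0, 1] for j in range(k)
    ])
    return difficulty, alpha, itc
```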

RESULTS

Eighty-eight responses were obtained: 76 from undergraduate medical students (from the first to the tenth semester of the course) and 12 from teachers/preceptors of the medical course.

Figure 1 shows the distribution of the number of correct answers in the students’ assessment according to the course semester.

Figure 1 Box and whisker plot (median and interquartile ranges) of the number of correct answers in the students’ assessment, according to the course semester. Source: elaborated by the authors.

Difficulty, reliability and internal consistency of the translated version of the ACE tool

Table 1 shows the analysis of individual items.

The Cronbach’s alpha value was 0.61.

Table 1 Analysis of Individual Items: distribution of items according to step, difficulty index and item-total correlation 

Item Step Difficulty index Item-total correlation
1 Clinical question 63% 0.28
2 Clinical question 28.4% 0.15
3 Research in literature 77.3% 0.36
4 Research in literature 62.5% 0.18
5 Critical analysis 67% 0.36
6 Critical analysis 15.9% 0.28
7 Critical analysis 51.1% -0.03
8 Critical analysis 97.7% 0.23
9 Critical analysis 80.7% 0.33
10 Critical analysis 67% 0.20
11 Critical analysis 30.7% 0.16
12 Integration 86.4% 0.21
13 Integration 69.3% 0.29
14 Integration 55.7% 0.36
15 Integration 44.3% 0.27

Source: elaborated by the authors.

Construct validity

The means obtained by the students were compared with those obtained by the teachers/preceptors, to assess the questionnaire’s ability to discriminate between different degrees of expertise. The mean number of correct answers by teachers/preceptors was significantly higher than that of the 76 students (10.25±1.71 vs. 8.73±1.80, mean difference of 1.52, 95%CI 0.47-2.57, p=0.005).
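Since this comparison relies only on group means, standard deviations and sizes, it can be approximately re-derived from the published summary statistics alone; a sketch using scipy follows. The output will only approximate the reported CI and p = 0.005, because the inputs are rounded to two decimal places.

```python
import numpy as np
from scipy import stats

# Pooled-variance Student's t test from the published summary statistics.
t, p = stats.ttest_ind_from_stats(
    mean1=10.25, std1=1.71, nobs1=12,  # teachers/preceptors
    mean2=8.73, std2=1.80, nobs2=76,   # students
    equal_var=True,
)

# 95% CI for the mean difference under the same pooled-variance model.
sp2 = (11 * 1.71**2 + 75 * 1.80**2) / 86   # pooled variance, df = 86
se = np.sqrt(sp2 * (1 / 12 + 1 / 76))      # standard error of the difference
half = stats.t.ppf(0.975, df=86) * se
diff = 10.25 - 8.73
print(f"diff = {diff:.2f}, 95% CI {diff - half:.2f} to {diff + half:.2f}, p = {p:.4f}")
```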

Summary of the properties of the translated version of the ACE tool

Chart 3 summarizes the properties of the translated version of the ACE Tool.

Chart 3 Properties of the translated version of the ACE Tool 

Property Test Used Acceptable Results Translated version performance
Content Validity Expert’s opinion Assesses steps 1-4 of evidence-based practice Acceptable
Index of Item Difficulty Percentage of correct answers Wide range allows implementation in different groups of participants Ranged from 15.9% to 97.7%
Internal Consistency Cronbach's alpha A Cronbach’s alpha of 0.6-0.7 is considered acceptable; 0.7-0.9 is considered good and >0.9 is considered excellent Cronbach’s alpha of 0.61
Internal Reliability Item-Total Correlation (ITC) ITC ≥ 0.15 is considered acceptable ≥ 0.15 for all items except item 7 (-0.03)
Construct validity Comparison between means of groups with different levels of knowledge Significant difference between teachers/preceptors and students On a 15-point scale, the students’ mean was 8.73 and the teachers' mean was 10.25 (p=0.005)

Source: elaborated by the authors.

DISCUSSION

Our results demonstrate that the translated version of the ACE Tool maintains the discriminatory capacity for different levels of expertise, as well as acceptable internal reliability and consistency, in line with the original version. The undergraduate medical students in our study obtained a mean of 8.73 correct answers and the teachers, 10.25, comparable to 8.6 for the “beginner” level and 10.4 for the “advanced” level in the original study. The reliability and consistency indices in our study were also consistent with those of the previous study4. Attention must be drawn to the relatively low value of internal consistency (Cronbach’s alpha between 0.6 and 0.7), only “acceptable” both in our research and in the original study by Ilic. The very purpose of the questionnaire, which addresses competencies in different domains (construction of the clinical question, literature search, critical appraisal and integration into the clinical scenario), contributes to a weaker relationship between the variables and, therefore, a lower numerical value of Cronbach’s alpha. On the other hand, each of the 15 items has its own relevance, as it addresses a specific competence, such as identifying the adequacy of randomization, blinding or intention-to-treat analysis, so that the answer to each question carries an important meaning, even when it diverges from the answers to other questions.

The ACE Tool is one of several standardized questionnaires used to assess competencies in EBM, alongside the Berlin questionnaire6 and the Fresno Test7, the latter of which has also been validated in Brazilian Portuguese8. The Berlin questionnaire addresses only critical appraisal. The Fresno Test, in turn, assesses three domains (“ask”, “search” and “critically appraise”) through open-ended questions, but requires a long time to answer (approximately one hour). The ACE Tool allows a broad assessment (“ask”, “search”, “critically appraise” and “integrate”) that stimulates clinical reasoning, with high practicality and a short response time. In fact, the ACE Tool has been used internationally to assess students9,10 and educational strategies11-14, although Buljan et al. observed a lower “sensitivity to change”, that is, a lower capacity of the ACE Tool to discriminate the knowledge gained after courses, limiting its use as a “post-test evaluation” in comparison with the Berlin questionnaire and the Fresno Test15. It is also important to note that the ACE Tool is specifically targeted at a therapeutic question, so that important focal points of clinical activity, such as diagnosis and prognosis, are not covered. Standardized questionnaires for diagnostic reasoning and evidence-based prognosis remain a gap in the literature.

The National Curriculum Guidelines for the Undergraduate Course in Medicine recognize the need for decision-making based on the critical and contextualized analysis of scientific evidence and explicitly point out, as a “key action”, the promotion of scientific and critical thinking and support for the production of new knowledge16. The ACE Tool addresses four of the five steps of evidence-based practice and allows the discrimination of specific knowledge and skills. It is thus an important tool for understanding participants’ prior knowledge, for planning and/or adapting the curriculum, and for identifying specific educational needs.

CONCLUSIONS

The Brazilian version of the ACE Tool shows acceptable psychometric properties similar to the original version and can be used as an instrument to assess competencies for Evidence-Based Medicine in Brazilian medical students.

ACKNOWLEDGMENT

To the Professional Master’s Degree in Health Education (MPES) and to the Tutorial Educational Program (PET) of Universidade Federal do Rio Grande do Norte for their support in conducting the study.

REFERENCES

1. Guyatt GH. Evidence-Based Medicine. ACP J Club. 1991;114:A16.

2. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312:71-2.

3. Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, et al. Sicily Statement on evidence-based practice. BMC Med Educ. 2005;5:1.

4. Ilic D, Nordin RB, Glasziou P, Tilson JK, Villanueva E. Development and validation of the ACE tool: assessing medical trainees’ competency in Evidence Based Medicine. BMC Med Educ. 2014;14:114.

5. Kline P. The handbook of psychological testing. 2nd ed. London: Routledge; 2000.

6. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer H-H, Kunz R. Do short courses in Evidence Based Medicine improve knowledge and skills? Validation of Berlin Questionnaire and before and after study of courses in Evidence Based Medicine. BMJ. 2002;325:1338-41.

7. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno Test of Competence in Evidence Based Medicine. BMJ. 2003;326:319-21.

8. Salerno MR, Herrmann F, Debon LM, Soldatelli MD, Forte GC, Bastos MD, et al. Brazilian version of the Fresno Test of Competence in Evidence-Based Medicine: a validation study. Sci Med. 2019;29(1):e32295.

9. Clode NJ, Danielson K, Dennett E. Perceptions of competency with Evidence-Based Medicine among medical students: changes through training and alignment with objective measures. N Z Med J. 2021;134(1531):63-75.

10. Mahmoud MA, Laws S, Kamal A, Al Mohanadi D, Al Mohammed A, Mahfoud ZR. Examining aptitude and barriers to Evidence-Based Medicine among trainees at an ACGME-I Accredited Program. BMC Med Educ. 2020;20:414.

11. Ilic D, Nordin RB, Glasziou P, Tilson JK, Villanueva E. A randomised controlled trial of a blended learning education intervention for teaching Evidence-Based Medicine. BMC Med Educ. 2015;15:39.

12. Yoon SH, Kim M, Tarver C, Loo LK. “ACEing” the evidence within Physical Medicine and Rehabilitation (PM&R). MedEdPortal. 2020;16:11051.

13. Goodarzi H, Teymourzadeh E, Rahimi S, Nasiri T. Efficacy of active and passive Evidence-Based Practice training for postgraduate medical residents: a non-randomized controlled trial. BMC Res Notes. 2021;14(1):317.

14. Kumaravel B, Stewart C, Ilic D. Face-to-face versus online clinical integrated EBM teaching in an undergraduate medical school: a pilot study. BMJ Evid Based Med. 2022;27:162-8.

15. Buljan I, Jeroncic A, Malicki M, Marusic M, Marusic A. How to choose an evidence-based medicine knowledge test for medical students? Comparison of three knowledge measures. BMC Med Educ. 2018;18:290.

16. Brasil. Resolução nº 3, de 20 de junho de 2014. Institui Diretrizes Curriculares Nacionais do Curso de Graduação em Medicina e dá outras providências. Brasília: Ministério da Educação; 2014.

Evaluated by the double-blind review process.

SOURCES OF FUNDING The authors declare no sources of funding.

Received: May 27, 2022; Accepted: May 31, 2022

Chief Editor: Daniela Chiesa. Associate editor: Not assigned.

AUTHORS’ CONTRIBUTION

Ferdinand Gilbert Saraiva da Silva Maia and Rosiane Viana Zuza Diniz participated in the study design, data analysis/interpretation and manuscript writing. Ferdinand Gilbert Saraiva da Silva Maia, Ana Karenina Carvalho de Souza, Breno Carvalho Cirne de Simas, Isadora Soares Lopes and Maria Paula Ribeiro Dantas Bezerra participated in data collection and analysis/interpretation.

CONFLICTS OF INTEREST

The authors declare no conflicts of interest.

Creative Commons License: this is an open-access article distributed under the terms of a Creative Commons license.