Estudos em Avaliação Educacional

Print version ISSN 0103-6831 / On-line version ISSN 1984-932X

Est. Aval. Educ. vol.35  São Paulo  2024  Epub May 07, 2024

https://doi.org/10.18222/eae.v35.10803_port 

ARTICLES

USING RUBRICS IN BASIC EDUCATION: A REVIEW AND RECOMMENDATIONS

Duquesne University, Pittsburgh, PA, United States of America


ABSTRACT

This narrative review of 17 studies of the use of rubrics and other criteria-referenced tools in basic education had two purposes. The first was to review studies of only the K-12 level, because previous reviews were heavily weighted toward studies conducted in higher education. The second was to use these studies as the basis for recommendations that may be useful to classroom teachers and to the teacher education faculty and school administrators who work with them. Three recommendations for using rubrics with K-12 students resulted from the present review: (1) ensure that rubrics are of high quality; (2) plan activities that actively engage students with the rubrics; (3) use rubrics to connect formative assessment with grading.

KEYWORDS RUBRICS; ASSESSMENT; ACADEMIC SUCCESS; MOTIVATION FOR LEARNING.


INTRODUCTION

Rubrics are widely used in basic and higher education, both as formative tools, to support feedback from the teacher, peers, or students themselves, and as summative tools, to validate grading decisions. True rubrics have two elements (Andrade, 2000; Brookhart, 2013): criteria, the qualities one should look for in student work, and descriptions of levels of performance across a continuum of quality. Related tools such as checklists and rating scales present criteria but lack descriptions of levels of performance.

Rubrics may be classified according to how they organize the criteria and the performance level descriptions. Analytical rubrics consider each criterion separately, presenting a scale of performance level descriptions for each one; they are often displayed as a matrix with a row for each criterion and its performance level descriptions. Analytical rubrics are especially effective for formative uses because they give students more detailed feedback, showing which criteria represent strengths of their work and which represent areas for improvement (Brookhart, 2013). Holistic rubrics, in comparison, present only one descriptive scale that considers all the criteria simultaneously.
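To make the analytical/holistic distinction concrete, the structure of an analytical rubric can be sketched as data: each criterion carries its own descriptive scale, so a piece of work receives a per-criterion profile rather than a single overall rating. The rubric content below is invented for illustration and is not taken from any of the reviewed studies.

```python
# Invented illustration: an analytical rubric stores one descriptive scale
# per criterion (here, 4 levels per criterion, ordered lowest to highest).
ANALYTIC_RUBRIC = {
    "thesis": ["missing", "vague", "clear", "clear and compelling"],
    "evidence": ["none", "sparse", "adequate", "rich and well chosen"],
    "organization": ["disorganized", "partly ordered", "logical", "logical and fluid"],
}

# A holistic rubric, by contrast, is a single scale covering all criteria at once.
HOLISTIC_SCALE = ["beginning", "developing", "proficient", "exemplary"]

def analytic_feedback(levels: dict[str, int]) -> dict[str, str]:
    """Turn per-criterion level judgments (0-3) into a feedback profile
    showing which criteria are strengths and which need improvement."""
    return {criterion: ANALYTIC_RUBRIC[criterion][level]
            for criterion, level in levels.items()}

profile = analytic_feedback({"thesis": 2, "evidence": 1, "organization": 3})
# "evidence" comes back as "sparse": an area for improvement in the profile
```

The point of the sketch is that the analytical form preserves a strengths-and-weaknesses profile that a single holistic judgment collapses away.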

In general, reviews of studies of rubrics have found them to have a positive effect on student learning, performance, and the self-regulation of learning (Brookhart & Chen, 2015; Jonsson & Svingby, 2007; Panadero & Jonsson, 2013; Panadero et al., 2023; Reddy & Andrade, 2010). These reviews have generally included studies conducted in both basic and higher education, with studies in higher education predominating; Reddy and Andrade (2010) included only studies of rubrics in higher education.

The current review has two purposes. The first is to provide a narrative review of studies of rubrics in basic education (K-12 or equivalent) and to provide a needed focus on rubric use in basic classroom teaching. The second is to distill recommendations for using rubrics in K-12 education and to inform teachers in basic education as well as teacher education faculty who work with preservice and in-service basic education teachers.

METHOD

A search was conducted in the Education Resources Information Center (ERIC) using the terms “scoring rubrics” AND “student learning”. Although much of the effectiveness of rubrics has been shown to derive from their use for formative assessment and feedback rather than for scoring (Panadero & Jonsson, 2013), the keyword structure of the ERIC database required using the term “scoring rubrics” to return articles relevant to this review. This search yielded 574 results.

The search was further restricted to articles published from 2014 to 2023, that is, in the last ten years and since this author’s previous review (Brookhart & Chen, 2015). This narrowed the pool to 308 articles. Abstracts of these 308 articles were screened to remove (a) studies conducted in higher education or pre-K; (b) studies that did not investigate the use of rubrics per se, but rather used rubrics to measure outcomes in studies of teacher evaluation, program or curriculum design, or instructional strategies; and (c) expository or theoretical pieces that discussed rubrics but were not empirical studies. The result was a list of 12 studies of rubric use in K-12 education published between 2014 and 2023. Six studies were added after hand-searching the reference sections of the relevant articles and contacting colleagues who do research on rubrics, yielding a total of 18 studies. These studies were read in their entirety, and it was found that two of them had been published from the same data set. The duplicate was eliminated, leaving 17 independent studies to describe in this review. Table 1 presents selected basic information about the studies.
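The screening sequence described above (date restriction, educational level, object of study, and empirical design) can be sketched as a simple filter. The record fields below are invented for illustration and do not reflect ERIC's actual metadata schema; in practice these judgments were made by reading abstracts, not by machine.

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Invented fields standing in for what an abstract screener would judge
    year: int
    level: str                 # "K-12", "higher ed", or "pre-K"
    empirical: bool            # empirical study vs. expository/theoretical piece
    rubric_is_focus: bool      # rubric use is studied, not merely used as a measure

def screen(records: list[Record]) -> list[Record]:
    """Apply the review's inclusion criteria to a pool of search results."""
    return [r for r in records
            if 2014 <= r.year <= 2023
            and r.level == "K-12"
            and r.empirical
            and r.rubric_is_focus]

pool = [
    Record(2018, "K-12", True, True),        # kept
    Record(2012, "K-12", True, True),        # excluded: outside 2014-2023
    Record(2020, "higher ed", True, True),   # excluded: higher education
    Record(2019, "K-12", False, True),       # excluded: not empirical
    Record(2021, "K-12", True, False),       # excluded: rubric only a measure
]
kept = screen(pool)  # one record survives screening
```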

TABLE 1 Studies of rubric use in basic education 

Auxtero and Callaman (2021). Philippines; 11th grade; quasi-experimental; 96 students.
Rubric: analytical critical thinking and problem-solving rubric, 5 criteria, 4 levels.
Intervention/use: using a rubric for learning application of derivatives in basic calculus (vs. not).
Outcomes studied: learning/performance.
Key findings: Both groups (rubric vs. non-rubric) improved pre-post, but the experimental group improved more. The authors interpreted this as rubrics helping learners understand the expectations and components of a particular task or problem.

Bradford et al. (2016). United States; 1st and 2nd grade; quasi-experimental; 20 students.
Rubric: analytical writing rubric (similar to 6+1 Trait) and a student-friendly version of it.
Intervention/use: mini-lessons on writing (by trait) vs. mini-lessons plus using a rubric, with instruction on how to use the rubric.
Outcomes studied: learning/performance; attitudes.
Key findings: Writing quality was the same for both groups at pre and post, and differed in the middle when only one group had rubrics (counterbalanced: Group B got rubrics first, then Group A). Both groups continued to improve to the posttest. No group differences in attitude survey results. Students wrote that rubrics helped them remember to do all six traits and check their work. Teachers said rubrics helped them plan the content of their mini-lessons.

Chen and Andrade (2018). United States; 5th grade; experimental; 220 matched pairs (312 students).
Rubric: different theater teachers used different criteria-referenced tools.
Intervention/use: criteria-referenced formative assessment (FA), meaning the teacher explicitly reported using (a) rubrics, checklists, or other criteria-based tools; (b) teacher, peer, or self-assessment; and (c) opportunity for revision.
Outcomes studied: learning/performance.
Key findings: For performance tasks, a small effect size favored the criteria group (d = .25). No significant difference between groups for multiple-choice or constructed-response test items.

Chen et al. (2017). United States; K-12; experimental; 611 matched pairs (1,222 students).
Rubric: different arts teachers (visual arts, theater, music, dance) used different criteria-referenced tools.
Intervention/use: criteria-referenced FA, meaning the teacher explicitly reported using (a) rubrics, checklists, or other criteria-based tools; (b) teacher, peer, or self-assessment; and (c) opportunity for revision.
Outcomes studied: learning/performance.
Key findings: The average treatment effect on the treated (ATT) across all arts disciplines (but no high school dance or music, or middle school music) was d = .26, a small effect size favoring the treatment group.

Gallego-Arrufat and Dandis (2014). Spain; 3rd year of secondary education (age 14-15); case study; 15 students.
Rubric: analytical mathematics problem-solving rubric, 3 criteria.
Intervention/use: using a rubric for self- and peer-assessment and teacher feedback.
Outcomes studied: attitudes and perceptions of using rubrics for writing mathematical explanations [teacher use].
Key findings: The rubric showed students what was expected and allowed the teacher to check student understanding daily. Some students complained and were worried about doing mathematical explanations, which they were not used to.

Hsia et al. (2016). Taiwan; junior high; quasi-experimental; 163 students.
Rubric: analytical drama rubric, 10 criteria, 4 levels.
Intervention/use: theater (drama: develop and perform scripts); both groups had the rubric, and one group used it for peer-assessment.
Outcomes studied: learning/performance; satisfaction, motivation, and self-efficacy for peer-assessment.
Key findings: Both groups were told the rubric would be used for grading as well as self- or peer-assessment. The peer-assessment group performed better and was more satisfied with the performance arts activity. Peers’ learning outcomes were highly correlated with their intrinsic motivation. The peer-assessment group gained self-efficacy (correlational evidence) both by evaluating peers’ work and by acting on peer comments on their own work.

Hubber et al. (2022). Australia; Year 8; case study; 2 teachers, 24 students.
Rubric: analytical interdisciplinary rubric, 8 content criteria (5 science, 3 math) and 2 learning skills criteria, 4 levels.
Intervention/use: tested interdisciplinary science/math lessons, with special emphasis on student self-assessment with a rubric.
Outcomes studied: learning/performance.
Key findings: The study highlighted the importance of selecting the right criteria and degrees of progression, and of structuring the task so students had to employ knowledge and skills in both disciplines. Only a few students performed at a high level. The discussion called for more teacher support during the work and more focus on the design and technology skills needed for the task.

Idris et al. (2017). Brunei; Year 10; action research; 25 students.
Rubric: analytical history rubrics.
Intervention/use: 9 lessons using rubrics.
Outcomes studied: [teacher use].
Key findings: Rubrics were used for (1) clarifying teachers’ expectations of student learning (at first, students had difficulty understanding, so the rubric was refined); (2) formative feedback (students were more focused on revising their work, more confident, and more focused on learning); (3) promoting thinking skills (especially during discussion when introducing the rubrics before the assessment task); (4) peer-assessment (students reacted positively to it); and (5) self-assessment (students tried this but were not terribly comfortable with it, as it was a new idea for them).

Kennedy and Shiel (2022). Ireland; pre-K to grade 2; case study and rubric trialing; 33 classrooms, 4 coaches, writing from 337 students.
Rubric: analytical writing rubrics, 5 criteria, 7 levels (written for teachers).
Intervention/use: rubric reliability, validity (internal structure and construct), and teacher use.
Outcomes studied: [teacher use].
Key findings: The rubric challenged teachers to reconsider, and often broaden, their thinking about writing, and helped them see that young children could “write” if drawing and invented spelling counted as writing. Rubrics were also valuable for mapping progression, as a starting point for instruction, for targeting mini-lessons, for facilitating differentiation, and as a basis for feedback and conferencing. Teachers need support, encouragement, and professional development to use the rubrics.

Kim (2019). South Korea; 11th grade; pre-experimental; 19 students.
Rubric: analytical writing rubric, 4 criteria, 5 levels.
Intervention/use: rubric-referenced self-assessment (RRSA).
Outcomes studied: learning/performance; perceptions of RRSA.
Key findings: Writing improved in quality, and essays were longer. Students perceived RRSA as effective: students who chose writing ability as the most important aspect of RRSA had higher scores, while students who chose self-confidence or motivation as the most important aspect had lower scores.

King et al. (2016). United States; 5th grade; action research; 18 students, 30% EL.
Rubric: analytical mathematical justification writing rubric, 4 criteria, 3 levels.
Intervention/use: instruction in writing mathematical justifications, co-creation of a rubric, and using it for self-assessment.
Outcomes studied: learning/performance.
Key findings: The lesson helped students distinguish between math steps (what you do) and math reasoning (why you do it). Students used the rubric they created for self-evaluation (which included self-scoring) three times before the post-assessment. Justification scores increased from 3.1 to 6.9 on the 8-point scale; the largest increase was in the criterion of mathematical reasoning.

Liu et al. (2016). Taiwan; 6th grade; quasi-experimental; 53 students.
Rubric: storytelling performance, 5 items (single description of each criterion, no levels).
Intervention/use: creating and sharing stories on iPads; one group had the rubric and a guided review process.
Outcomes studied: learning/performance; creative self-efficacy.
Key findings: Stories created by the peer review group got significantly higher scores, especially for transitions/edits, story planning, and accuracy of information, and were better on each dimension except Camera (under technical quality). For story structure, the control group used a larger number of setting elements, while the peer review group used a larger number of elements in event, action, and consequence. There was no significant difference in creative self-efficacy. In the peer review group (but not the control group), creative self-efficacy was positively correlated with originality and creativity ratings of stories and with drawing ratings in technical quality.

Mahmood and Jacobo (2019). United States; 10th and 11th grade; action research; 12 students, including 2 gifted, 3 LD/IEP, 2 EL, and 2 foster youth.
Rubric: sliding scale rubrics, 4 levels (showing a student just their current level and the next one).
Intervention/use: grading math portfolios on growth.
Outcomes studied: learning/performance; attitudes toward learning, success, and grading.
Key findings: Some students were hesitant, but in the post survey most said they felt motivated to improve with the new system. In the first iteration, 6 students showed positive growth, 2 had lower scores, and 4 had the same score as on the previous portfolio; in the second iteration, 3 improved, 4 had lower scores, and 5 showed no growth. Strategies used to promote growth included internal benchmarks, opportunities for peer feedback, extra time, clearly articulating look-fors in the rubric, and praise.

Nsabayezu et al. (2022). Rwanda; 12-13-year-olds; case study; 158 students.
Rubric: analytical organic chemistry rubrics.
Intervention/use: rubric use for formative assessment (FA).
Outcomes studied: satisfaction and motivation from using FA rubrics.
Key findings: Students reported satisfaction with rubric use and motivation, affirming that rubrics were effective for their learning and that the use of technology was helpful. However, students used the technology only for teaching and learning, not assessment, and some teachers did not share the rubrics with students in a timely manner for formative assessment. Teachers asserted that the rubrics helped them clarify what to look for in students’ work and facilitated grading.

Safadi (2017). Israel (Arab sector); 8th grade; quasi-experimental; 86 students.
Rubric: point scheme on the steps in a worked physics example.
Intervention/use: self-diagnosis (SD) in both groups; the graded SD group used the “rubric,” while the traditional SD group did whole-class discussion of the worked examples.
Outcomes studied: learning/performance.
Key findings: The groups were not equivalent at the start (the discussion group was higher). There was an interaction effect, and the SD group, which started lower, surpassed the discussion group. The SD group used Newton’s 3rd law to solve one of the problems at 32% on the first exam and 77% on the isomorphic repeat exam (cf. 48%/62% for the discussion group).

Safadi and Saadi (2021). Israel (Arab sector); 10th grade; experimental; 162 students.
Rubric: point scheme on the steps in a worked physics example.
Intervention/use: self-diagnosis (SD) in both groups; the graded SD (GSD) group used the “rubric,” while the traditional SD (TSD) group did SD without a rubric.
Outcomes studied: learning/performance.
Key findings: The GSD group’s performance improved more pre-post than the TSD group’s, and GSD students repaired their naïve conceptions more than TSD students (analysis of solutions). Students whose self-scoring was closer to the researchers’ detected their errors more, learned more from them, and performed better on the repeat exam.

Smit et al. (2017). Switzerland; 5th and 6th grade; quasi-experimental; 762 students in 44 classes.
Rubric: analytical mathematical reasoning rubric, 4 criteria, 4 levels.
Intervention/use: instruction in mathematical reasoning, with or without a rubric.
Outcomes studied: learning/performance; self-regulation and self-efficacy; teacher diagnostic skills and formative feedback.
Key findings: In longitudinal SEM models, the rubric supported teachers’ perceived diagnostic skills and indirectly affected their use of formative feedback. The rubric had a direct effect on students’ perceptions of formative feedback and student self-assessment. There were no significant effects on student outcomes, but outcome effects may be mediated by self-regulation and self-efficacy.

Source: Author’s elaboration.

Note that none of the studies investigated the use of rubrics without a context. In every study, rubric use was preceded by some sort of introduction to or training on the rubrics, processing activities to familiarize students with the rubrics (e.g., guided review, peer-assessment, self-assessment), and/or instruction based on the rubrics. This means that a review of studies isolating rubrics from other potential factors affecting learning, perceptions, and/or motivation was not possible. It also foreshadows one of the recommendations to be made, which is that rubrics are a tool that must be used in the context of formative assessment strategies to be effective. It has been noted before (Brookhart & Chen, 2015; Jonsson, 2010) that, to make criteria transparent to students, both high-quality rubrics (or some other tool) and effective procedures for using them are needed. Just handing out rubrics to students without any processing is not expected to make a difference.

RESULTS

The 17 studies used in the present review were conducted in 11 countries: Australia, Brunei, Ireland, Israel (2 studies), South Korea, the Philippines, Rwanda, Spain, Switzerland, Taiwan (2 studies), and the United States (5 studies). Six studies were conducted at the elementary level (grades K-6), ten at the secondary level (grades 7-12 or equivalent), and one at varying K-12 grade levels. The subject matter the rubrics addressed included critical thinking and problem-solving, writing, the arts (visual arts, theater, music, drama), mathematics, science, and history. In all, the 17 studies comprise data from 3,484 students.

All studies were published in journals. Research designs varied: 1 pre-experimental, 6 quasi-experimental, 3 experimental, 4 case study, and 3 action research. Since a quantitative synthesis (meta-analysis) was not the goal, all designs were retained in the present review to include as broad a base for recommendations as possible.

The rubrics in 12 of the studies fit the definition of true rubrics (having criteria and performance level descriptions); these all were analytical rubrics, that is, the descriptive performance scales considered one criterion at a time. Two studies from one research group (Chen & Andrade, 2018; Chen et al., 2017) considered the use of any criteria-referenced tools, including rubrics and checklists, as long as the teacher used them for formative assessment and provided students opportunities for revision. Two studies from another research group (Safadi, 2017; Safadi & Saadi, 2021) used rating scales based on the steps in worked examples of physics problems. One study (Liu et al., 2016) used a checklist-like set of items describing qualities in students’ stories designed for peer review.

All 17 studies are considered in this review, instead of limiting the pool to just the 12 studies that used true rubrics. This is possible because this is a narrative review, which allows discussion of the set of studies without requiring them to meet assumptions that would support a meta-analysis. All 17 studies had something to say about organizing criteria for students and using them to facilitate formative feedback (by teacher, self, or peer), grading, or both.

Outcomes of interest for using rubrics with students include effects on learning and performance; attitudes and perceptions; and motivational variables, such as self-efficacy. Most of the studies (13) investigated the effects of rubric use on learning or performance. Four studies looked at effects on motivation, including self-efficacy. Four studies looked at effects of rubrics on students’ attitudes and perceptions. Three studies investigated teacher use of rubrics; they were retained in the pool for the present review because they support the goal of making recommendations to teachers and teacher educators. The sections below describe the findings of these studies.

Effects of rubrics on learning and performance

Eleven of the 13 studies that investigated effects of rubrics on learning and performance showed that using rubrics was associated with improved learning or higher performance. This section, therefore, does not focus on the point that rubrics can improve performance, which hardly needs further support (Panadero et al., 2023), but rather on sharing descriptions of the training, activities, or instruction that accompanied the rubric use. In other words, if rubrics are tools, what were these tools used to do?

In all 11 studies showing positive outcomes for learning and performance, students were introduced to rubrics and given opportunities to use them. The process familiarized students with the criteria and performance level descriptions and, therefore, helped them form a concept of what good work looks like. In all 11 studies, the rubrics were used formatively. Ten of those studies explicitly described formative uses: as a framework for instruction (Bradford et al., 2016); as the basis of individual and group activities (Auxtero & Callaman, 2021); and/or as the basis of feedback from teacher, peer, or self (Chen & Andrade, 2018; Chen et al., 2017; Hsia et al., 2016; Kim, 2019; King et al., 2016; Liu et al., 2016; Safadi, 2017; Safadi & Saadi, 2021). One study (Mahmood & Jacobo, 2019) focused on the use of sliding scale rubrics for grading, but the students used the rubrics formatively as they were preparing mathematics portfolios to submit for grading; in other words, they self-assessed as they prepared their portfolios. In this study, “sliding scale” does not mean that the rubric performance level descriptions changed, but that students only saw the descriptions for the levels above and below their current performance. In several studies, it was explicitly concluded, either by author inference or by talking with students, that rubrics helped students understand the teacher’s expectations and/or the elements or components of a particular task. In either case, their attention was focused on the aspects of the task that give evidence of learning (Auxtero & Callaman, 2021; Bradford et al., 2016; Chen et al., 2017; Chen & Andrade, 2018; Kim, 2019).
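The “sliding scale” display rule can be sketched as a windowing function over a fixed set of performance level descriptions. The window parameters below are illustrative; the exact window used in Mahmood and Jacobo (2019) may differ, and the function names and levels are invented for this sketch.

```python
def visible_levels(descriptions: list[str], current: int,
                   below: int = 1, above: int = 1) -> list[str]:
    """Return only the level descriptions near a student's current level.

    The full rubric (all `descriptions`) stays fixed; what "slides" is the
    window of levels shown to the student. `below`/`above` are illustrative
    parameters, since the exact window may vary by implementation.
    """
    lo = max(0, current - below)
    hi = min(len(descriptions), current + above + 1)
    return descriptions[lo:hi]

levels = ["novice", "developing", "proficient", "advanced"]
# A student currently at level 1 ("developing") sees a three-level window:
window = visible_levels(levels, current=1)
```

The design intent, as reported in the study, is to focus each student's attention on their current performance and the next attainable level rather than on the whole scale.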

As shown above, most of the studies that employed rubrics successfully to increase student learning and performance used the rubrics formatively, to support learning while it was happening. Two of the studies (Chen & Andrade, 2018; Chen et al., 2017), using a rigorous design, investigated the effects of implementing criteria-referenced tools (whether rubrics, checklists, or something else) accompanied by teacher, peer, or self-assessment that yielded formative feedback to students and provided the opportunity for revision of work. This combination led to a positive effect on learning and performance in the arts compared with a control group. In this intervention, the salient characteristics of the treatment were providing criteria and ensuring that students used them, often for self- or peer-assessment but also, and certainly, for improving their work. This combination of clear criteria and assured student use turns out to be a thread running through all of the studies that reported improved learning and performance.

Using rubrics for peer-assessment (peer feedback) figures in two more of the successful studies, both in the context of web-based learning. Hsia et al. (2016), using a quasi-experimental design, studied web-based assessment in a junior high school performing arts course where the students wrote and videotaped short dramas based on Chinese folk stories. Both groups were given the rubric, but only one group participated in a peer review process; the control group had the rubric, but there was no guarantee that any of its students used it when viewing their group’s video performances, as the treatment group had to do. Liu et al. (2016), also using a quasi-experimental design, investigated the effects of peer review using a checklist of five criteria presented in sentence form (e.g., “The story has a vivid background, events, actions, and ending,” p. 289) on an iPad storytelling activity called “Saving the Forest”. The control group could view and discuss the stories of others but did not use the criteria. Again, the common thread seems to be the clear presentation of criteria linked to some activity that ensured students used them.

Four other studies displayed this combination of presenting criteria clearly and ensuring students used the criteria purposefully. Two of those studies were descriptive (no control group) and used true rubrics. Kim (2019) found that high school students who used rubric-referenced self-assessment wrote essays that were longer and of higher quality after using the rubric. King et al. (2016) found that co-creating a rubric for writing mathematical justifications, and then using their rubric for self-assessment that included both feedback to themselves and self-scoring, helped 5th graders distinguish between simply presenting the steps to solve a math problem and explaining the reasoning behind those steps, which had been an issue for them before the lessons that used the rubric.

Two additional studies from one research group (Safadi, 2017; Safadi & Saadi, 2021) used a rigorous design to test the effectiveness of an instrument they called a rubric that supported a self-diagnosis activity. After solving force and motion problems (Safadi, 2017) or geometric optics problems (Safadi & Saadi, 2021), students in the treatment group reviewed a complete, worked example for each of the problems, along with instructions to assign themselves a score, according to directions, for each step in the problem. Students in the control group experienced whole-class discussion of the worked examples without the self-diagnosis and scoring activity. Again, what was tested was the clear presentation of criteria and an activity that ensured the students needed to engage with those criteria.

Only two studies that investigated the effects of rubrics on learning and performance did not find positive results, and it is not possible to generalize from them. However, it is worth noting that, in one of those studies (Hubber et al., 2022), students were given a rubric only during the final week of a three-week interdisciplinary science and mathematics unit, when their culminating group project (the marble run challenge) was already in progress. By then, the students were past the design phase and into the construction and testing phase. As Hubber et al. (2022, p. 16) describe: “A rubric . . . was introduced in the final week and students were asked to review their work against the rubric and teachers highlighted the need for students to self-assess their work and the teamwork of the group.” No guided activities were described for the students, so their use of the rubric could have been perfunctory, or they could even have ignored the rubric.

In the other study that did not establish an association between rubric use and improved performance (Smit et al., 2017), students did participate in self- and peer-assessment. The use of rubrics had direct effects on their perceptions of formative feedback and self-assessment, and indirect effects on self-regulation and self-efficacy. The authors suggested that these effects might mediate outcomes, that is, improvement in self-assessment might ultimately lead to improved learning and performance.

Effects of rubrics on student motivation

Three of the four studies that looked at the association between rubric use and student motivation found positive outcomes (Hsia et al., 2016; Nsabayezu et al., 2022; Smit et al., 2017), and the fourth found no difference (Liu et al., 2016). The three studies reporting positive motivational outcomes found that rubric use was associated with increased student self-efficacy specific to rubric use. Hsia et al. (2016) showed in a quasi-experimental study that, for students who participated in online peer-assessment, self-efficacy for peer-assessment, especially for evaluating peers’ work and for receiving peers’ feedback, was related to performance. Smit et al. (2017) used a quasi-experimental design and showed, with causal modeling, that rubric use had a direct effect on students’ perceptions of their peer- and self-assessment skills and formative feedback, as well as indirect effects on both self-efficacy and self-regulation. In a descriptive case study, Nsabayezu et al. (2022) reported that students expressed general satisfaction and motivation regarding rubric use.

The one study that found no difference in self-efficacy between students who used rubrics and those who did not used peer-assessment with checklist-style items about creating stories on an iPad. The study measured creative self-efficacy, defined as students’ confidence in creating novel works (Liu et al., 2016, p. 287), not self-efficacy for using rubrics. While the peer review group got significantly higher scores on their stories, on almost every dimension of the rubric, there was no significant difference in creative self-efficacy between the peer review group and the control group. It is worth pointing out that creative self-efficacy is a broader construct than self-efficacy for using rubrics, and it may require more than one encounter to change.

Effects of rubrics on student attitudes and perceptions

Student attitudes and perceptions affect their engagement with any instructional tool or strategy. Four studies reported on student attitudes and perceptions, but the focus of the attitude questions differed (e.g., attitude toward the content taught, attitude toward rubrics, or both). In a quasi-experimental study, Bradford et al. (2016) found no difference between the rubric and non-rubric groups in attitudes toward writing, but the primary (grades 1-2) students’ written reasons for their attitude ratings revealed they believed that using rubrics allowed them to finish their opinion paragraphs quickly and that they performed well on some writing skills.

On balance, in descriptive (non-experimental) studies, students reported a generally positive view of rubrics, along with some negative attitudes. Teachers in Gallego-Arrufat and Dandis’s (2014) study said using a rubric enhanced secondary students’ engagement and learning, as well as their own teaching, but that the lack of experience with rubrics, the length and complexity of the rubric, and the amount of time needed to use this new tool created difficulties for both teachers and students. The authors reported some resistance regarding the subject matter: these students were in their third semester, had never done mathematical explanations before, and some complained about having to learn this new content. Kim (2019) found secondary students came to perceive that rubric-referenced self-assessment was effective and that it positively affected their attitudes toward writing. Also working with secondary students, Mahmood and Jacobo (2019) found that using sliding-scale rubrics led most students to report positive attitudes and motivation to improve, although some students were hesitant.

Teacher use of rubrics

Findings about teacher attitudes, opinions, and experiences with the use of rubrics do not speak directly to student outcomes, but they do speak directly to implementing rubrics in the classroom. Implementation is a necessary step before any educational tool or strategy can have an effect on students and their learning. Three studies collected interview data, and in one case diary data (Gallego-Arrufat & Dandis, 2014), from teachers, their coaches (Kennedy & Shiel, 2022), or students (Idris et al., 2017) about the use of rubrics in the classroom.

Teachers use rubrics to clarify expectations (Gallego-Arrufat & Dandis, 2014; Idris et al., 2017; Kennedy & Shiel, 2022). When rubrics are clear, students do use them to understand what is expected (Gallego-Arrufat & Dandis, 2014; Idris et al., 2017; Kennedy & Shiel, 2022). Teachers also use rubrics in formative assessment: for giving feedback in self- and peer-assessment (Idris et al., 2017; Kennedy & Shiel, 2022), for increasing the objectivity of summative evaluation (grading; Gallego-Arrufat & Dandis, 2014), for promoting students’ thinking skills (Idris et al., 2017), and for planning their lessons (Gallego-Arrufat & Dandis, 2014; Kennedy & Shiel, 2022). If the rubric is designed to support it, teachers can also use rubrics to identify students’ developmental level on a skill such as writing (Kennedy & Shiel, 2022).

Two main difficulties or obstacles to using rubrics were reported. One was the amount of time it took teachers to whom rubrics were new to learn how to use them (Gallego-Arrufat & Dandis, 2014). A second was that the design and language of the rubric needed to be understandable to students (Gallego-Arrufat & Dandis, 2014).

DISCUSSION

The 17 studies of rubric use in basic education (grades K-12 or equivalent) in the present review lead to the same conclusion as previous reviews of studies of rubrics in higher education, or in mixed higher and basic education (Brookhart & Chen, 2015; Jonsson & Svingby, 2007; Panadero & Jonsson, 2013; Panadero et al., 2023; Reddy & Andrade, 2010). Panadero et al. (2023) reported a moderate positive effect of rubrics on academic performance, and smaller effects of rubrics on self-regulation of learning and self-efficacy. In the present review, focused on rubrics specifically in basic education, 11 of 13 studies reported positive effects on learning and performance across a variety of study designs, grade levels, and subject areas, and 3 of 4 studies reported positive effects on self-efficacy. The likely reason is that rubrics make the qualities of good work explicit for formative assessment and feedback, and they make final expectations explicit for summative assessment and grading (Brookhart, 2018; Brookhart & Chen, 2015; Jonsson & Svingby, 2007).

A narrative review of the studies showed that successful rubric interventions featured: clear criteria related to the learning or performance goal; and, importantly, some introduction or instruction about the rubrics coupled with an activity that ensured the students had to engage with the rubrics in some detail. It is worth noting that, for many learning goals, learning the criteria amounts to learning the content. For example, a writing rubric using the criteria ideas, organization, word choice, sentence fluency, conventions, and voice (Bradford et al., 2016) communicates to students these attributes of effective writing. Or, a mathematical justification rubric using the criteria mathematical language, mathematical steps, mathematical reasoning, and solution in context (King et al., 2016) communicates to students these attributes of effective mathematical justification. In both cases, the criteria are exactly what the students are trying to learn.

One of the studies in this review, using a longitudinal model (Smit et al., 2017), showed that the formative use of rubrics during learning directly affected students’ perceptions of peer- and self-assessment and of the formative assessment process more generally, and indirectly affected both self-regulation and self-efficacy. In other words, students who engage with clear learning criteria have confidence in, and understand the process of, their learning. Rubrics are tools, and they work by clarifying the criteria for good work and supporting student engagement in the formative assessment process. Of course, for that to happen, the criteria themselves need to be appropriate and understandable, and students have to engage with them.

RECOMMENDATIONS FOR USING RUBRICS

Several recommendations for using rubrics in basic education follow from these findings. Classroom teachers, their administrators and supervisors, and teacher education faculty may find these recommendations useful.

  1. Ensure that rubrics are of high quality.

  2. Plan activities that actively engage students with the rubrics.

  3. Use rubrics to connect formative assessment and grading.

Ensure that rubrics are of high quality

Rubrics that are of high quality are based on clear, appropriate criteria that are drawn from the learning goal they are meant to assess, not the task (Andrade, 2000; Brookhart, 2013; McTighe & Frontier, 2022). For example, in a writing rubric, the criteria ideas, organization, word choice, sentence fluency, conventions, and voice (Bradford et al., 2016) are derived from the characteristics of effective writing that the students are to be learning, not from the characteristics of the task (e.g., the rubric did not include a criterion like “took a position about the best kind of pizza”). Such criteria identify evidence of learning across a range of similar tasks (e.g., an opinion piece about the student’s favorite subject in school, or about the best flavor of ice cream). This, in turn, helps clarify the learning goal for students, give meaning to the task, and focus students on improving (in this case) their writing rather than completing one assignment.

Rubrics that are of high quality feature performance-level descriptions that cover the continuum of work quality from low to high. Even if a teacher does not expect any student work to be located at a particular level (for example, at the lowest quality level on the performance scale), the description should still be there to communicate the range of performance and to give students a sense of where their current performance falls on that range. Teachers can help students review illustrative examples of work at the various levels to help them develop a concept of what each criterion means (Andrade, 2000; Brookhart, 2013). Performance-level descriptions should be understandable to the students and teachers who will use them; otherwise, they will not be useful. In pursuit of this goal, use student-friendly language, with words and phrases that students can hear themselves using, and often use nouns instead of pronouns (e.g., “my organization,” not “it”).

Actively engage students with the rubrics

Successful studies in this review used various strategies to actively engage students with rubrics, including: giving students practice using the rubrics, individually or in groups, on prepared work samples; basing teacher feedback on the criteria and providing students opportunities for revision; involving students in rubric-referenced self- or peer-assessment; and using rubrics as a framework for planning instruction. These active strategies are consistent with other recommendations (e.g., Andrade, 2000; Brookhart, 2013; McTighe & Frontier, 2022; Panadero et al., 2016).

Simply distributing a rubric to students does not guarantee they will read it, much less understand it. Auxtero and Callaman (2021) planned a group activity in which students in the treatment group were first familiarized with the rubric and then worked in pairs for additional practice using it. Other ways to introduce rubrics to students include having them ask questions, having them rewrite the performance-level descriptions in their own words, or having them sort and match sample student work (or mocked-up sample work) to the performance-level descriptions for each criterion. For learning goals with which students are already somewhat familiar (e.g., mathematical justification; King et al., 2016), students can co-create the rubrics with the teacher.

Once the rubric is familiar to the students and their conception of the learning goal is beginning to be shaped by it, students can use the rubric for peer- or self-assessment, as was done in all of the successful studies in the present review. Students will need instruction on how to match qualities in the work with qualities described in the rubric. One strategy, perhaps done in pairs, is to use a highlighter to mark the evidence in the work in the same color as the corresponding description in the rubric. For example, if a mathematical problem-solving rubric specifies that the student should write an equation modelling the problem, then the students would highlight the equation in the work and the phrase “write an equation modelling the problem” in the rubric with the same color.

For self-assessment, students match the descriptions in the rubric with their own work and then use the description at the next level of the rubric to set a goal for improvement. Before students begin, the teacher should make sure they understand the purpose of the self-assessment: it is not self-grading but rather reviewing their work in order to make it better. Give students an activity or protocol, for example, a copy of the rubric with a comments section beside it for making notes. However the self-assessment is structured, students also need an opportunity to use their self-assessment feedback to revise their work (Chen & Andrade, 2018; Chen et al., 2017). Self-assessment feedback that is not used is pointless, and students will soon see it as futile. In contrast, closing the loop by seeing the self-assessment lead to improvement will contribute to students’ self-regulation of learning (Panadero et al., 2016). Give students feedback on the quality of their self-assessment, and explicitly connect the self-assessment they did with the improvement in their learning.

Peer-assessment is similar to self-assessment in that students match descriptions in the rubric with work, but they do this for peers. In addition to clarifying the purpose of the peer-assessment and using clear rubrics, peer-assessment requires some attention to the social nature of this learning strategy. Match participants into compatible pairs or small groups that can work together. Make sure students understand they are to assess the work, not the person, and, if the peer feedback is written, that it should be clear and helpful to the peer who will receive it. As with self-assessment, give students feedback on the quality of their peer-assessment, give peers an opportunity to revise their work, and explicitly connect the improvement they see with the peer-assessment they did.

Teachers can use the criteria in a rubric as a framework for planning their instruction in lessons or mini-lessons (Gallego-Arrufat & Dandis, 2014; Idris et al., 2017; Kennedy & Shiel, 2022; Smit et al., 2017). This author has observed lessons where teachers structured their didactic introduction to a lesson or topic by illustrating the criteria and then used the language of the criteria as they observed and commented while students were working on individual or group activities. The likely reason that structuring a lesson around the criteria is effective is that, if the criteria are appropriately linked to the learning goal and not the task, students’ understanding of the criteria (or skill in demonstrating them) is equivalent to achieving the learning goal.

Use rubrics to connect formative assessment and grading

Only one study in the present review focused on rubrics used for grading (Mahmood & Jacobo, 2019). It is worth noting that in that study, students used the rubric formatively, self-assessing their mathematics portfolios against the rubrics with which they would be graded. The fact that rubrics can make this connection between formative assessment and summative assessment (grading) is one of the benefits of using rubrics in the classroom, and it has been noted before (Brookhart, 2013). There are several ways to connect the criteria used in rubric-referenced formative assessment to the criteria ultimately used for grading. Sometimes, lessons tackle one criterion at a time. For example, a writing teacher might first focus on the ideas in students’ work, then on word choice, and so on, culminating in the full use of the criteria at the end of a unit. Or, a music teacher might focus first on students singing the correct pitch, then on tempo, then on dynamics, culminating in the full use of the criteria at the final performance.

Another way to connect the criteria used in rubric-referenced formative assessment to the criteria ultimately used for grading is to use “lesson-sized” versions of the criteria for initial instruction. For example, in a primary school lesson, students might focus on the use of periods at the end of sentences, then on the use of question marks at the end of questions, and finally on the use of exclamation points at the end of exclamations, culminating in a rubric that says, in part, something like, “I put the right end punctuation at the ends of my sentences”.

CONCLUSION

This narrative review of 17 studies of the use of rubrics and other criteria-referenced tools in basic education had two purposes. The first was to review studies only at the K-12 level, because previous reviews were heavily weighted toward studies conducted in higher education. The second was to use these studies as the basis for recommendations that may be useful to classroom teachers and to the teacher education faculty and school administrators who work with them.

The recommendations distilled from the present research review are very much in line with recommendations already found in the professional literature. There are probably two reasons for this. The first is that the authors of studies about rubrics have most likely read and learned from the professional literature before they designed their studies. The second is that rubrics are a tool that, when accompanied by strategies that engage students in the formative learning cycle in their classes, supports student self-regulation of learning.

Rubrics effectively present to students one of the most basic necessities for the self-regulation of learning, namely, a clear description of the learning goal and criteria for assessing how close one is to it. In this way, rubrics support improved student learning and performance.

HOW TO CITE:Brookhart, S. M. (2024). Using rubrics in basic education: A review and recommendations. Estudos em Avaliação Educacional, 35, Article e10803. https://doi.org/10.18222/eae.v35.10803

REFERENCES

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18.

Auxtero, L. C., & Callaman, R. A. (2021). Rubric as a learning tool in teaching application of derivatives in basic calculus. Journal of Research and Advances in Mathematics Education, 6(1), 46-58. https://doi.org/10.23917/jramathedu.v6i1.11449

Bradford, K. L., Newland, A. C., Rule, A. C., & Montgomery, S. E. (2016). Rubrics as a tool in writing instruction: Effects on the opinion essays of first and second graders. Early Childhood Education Journal, 44(5), 463-472. https://doi.org/10.1007/s10643-015-0727-0

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.

Brookhart, S. M. (2018). Appropriate criteria: Key to effective rubrics. Frontiers in Education, 3, Article 22. https://doi.org/10.3389/feduc.2018.00022

Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343-368. https://doi.org/10.1080/00131911.2014.929565

Chen, F., & Andrade, H. (2018). The impact of criteria-referenced formative assessment on fifth grade students’ theater arts achievement. Journal of Educational Research, 111(3), 310-319. https://doi.org/10.1080/00220671.2016.1255870

Chen, F., Lui, A., Andrade, H., Valle, C., & Mir, H. (2017). Criteria-referenced formative assessment in the arts. Educational Assessment, Evaluation, and Accountability, 29(3), 297-314. https://doi.org/10.1007/s11092-017-9259-z

Gallego-Arrufat, M. J., & Dandis, M. (2014). Rubrics in a secondary mathematics class. International Electronic Journal of Mathematics Education, 9(1-2), 75-84. https://www.iejme.com/download/rubrics-in-a-secondary-mathematics-class.pdf

Hsia, L., Huang, I., & Hwang, G. (2016). A web-based peer-assessment approach to improving junior high school students’ performance, self-efficacy, and motivation in performing arts courses. British Journal of Educational Technology, 47(4), 618-632. https://eric.ed.gov/?id=EJ1103675

Hubber, P., Widjaja, W., & Aranda, G. (2022). Assessment of an interdisciplinary project in science and mathematics: Opportunities and challenges. Teaching Science, 68(1), 13-25. https://eric.ed.gov/?id=EJ1346068

Idris, S. H., Jawawi, R., Mahadi, M. A., Matzin, R., Shahrill, M., Jaidin, J. H., Petra, N. A., & Mundia, L. (2017). The use of rubrics in developing students’ understanding of history. Advanced Science Letters, 23(2), 901-904. https://doi.org/10.1166/asl.2017.7432

Jonsson, A. (2010). The use of transparency in the ‘Interactive examination’ for student teachers. Assessment in Education: Principles, Policy & Practice, 17(2), 183-197. https://doi.org/10.1080/09695941003694441

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144. https://doi.org/10.1016/j.edurev.2007.05.002

Kennedy, E., & Shiel, G. (2022). Writing assessment for communities of writers: Rubric validation to support formative assessment of writing in Pre-K to grade 2. Assessment in Education: Principles, Policy & Practice, 29(2), 127-149. https://doi.org/10.1080/0969594X.2022.2047608

Kim, J. (2019). Effects of rubric-referenced self-assessment training on Korean high school students’ English writing. English Teaching, 74(3), 79-111. https://doi.org/10.15858/engtea.74.3.201909.79

King, B., Raposo, D., & Gimenez, M. (2016). Promoting student buy-in: Using writing to develop mathematical understanding. Georgia Educational Researcher, 13(2), Article 2. https://doi.org/10.20429/ger.2016.130202

Liu, C. C., Lu, K. H., Wu, L. Y., & Tsai, C. C. (2016). The impact of peer review on creative self-efficacy and learning performance in Web 2.0 learning activities. Educational Technology & Society, 19(2), 286-297. https://www.jstor.org/stable/jeductechsoci.19.2.286

Mahmood, D., & Jacobo, H. (2019). Grading for growth: Using sliding scale rubrics to motivate struggling learners. Interdisciplinary Journal of Problem-Based Learning, 13(2), Article 10. https://doi.org/10.7771/1541-5015.1844

McTighe, J., & Frontier, T. (2022). How to provide better feedback through rubrics. Educational Leadership, 79(7), 17-23.

Nsabayezu, E., Mukiza, J., Iyamuremye, A., Mukamanzi, O. U., & Mbonyiryivuze, A. (2022). Rubric based formative assessment to support students’ learning of organic chemistry in the selected secondary schools in Rwanda: A technology based learning. Education and Information Technologies, 27, 12251-12271. https://doi.org/10.1007/s10639-022-11113-5

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129-144. https://doi.org/10.1016/j.edurev.2013.01.002

Panadero, E., Jonsson, A., Pinedo, L., & Fernández-Castilla, B. (2023). Effects of rubrics on academic performance, self-regulated learning, and self-efficacy: A meta-analytic review. Educational Psychology Review, 35, Article 113. https://doi.org/10.1007/s10648-023-09823-4

Panadero, E., Jonsson, A., & Strijbos, J.-W. (2016). Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation. In D. Laveault, & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 311-326). Springer.

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. https://doi.org/10.1080/02602930902862859

Safadi, R. (2017). Self-diagnosis as a tool for supporting students’ conceptual understanding and achievements in physics: The case of 8th-graders studying force and motion. Physics Education, 52(1), Article 14002. https://doi.org/10.1088/1361-6552/52/1/014002

Safadi, R., & Saadi, S. (2021). Learning from self-diagnosis activities when contrasting students’ own solutions with worked examples: The case of 10th graders studying geometric optics. Research in Science Education, 51, 523-546. https://doi.org/10.1007/s11165-018-9806-8

Smit, R., Bachmann, P., Blum, V., Birri, T., & Hess, K. (2017). Effects of a rubric for mathematical reasoning on teaching and learning in primary school. Instructional Science, 45(5), 603-622. https://doi.org/10.1007/s11251-017-9416-2

Received: December 21, 2023; Accepted: January 10, 2024

Creative Commons License: This is an open-access article published under a Creative Commons license.