by Craig A. Mertler
Bowling Green State University
Rubrics are rating scales, as opposed to checklists, that are used with performance assessments. They are formally defined as scoring guides, consisting of specific pre-established performance criteria, used in evaluating student work on performance assessments. A rubric is typically the specific scoring instrument used to evaluate student performances or the products resulting from a performance task.
There are two types of rubrics: holistic and analytic (see Figure 1). A holistic rubric requires the teacher to score the overall process or product as a whole, without judging the component parts separately (Nitko, 2001). In contrast, with an analytic rubric, the teacher scores separate, individual parts of the product or performance first, then sums the individual scores to obtain a total score (Moskal, 2000; Nitko, 2001).
Figure 1: Types of scoring instruments for performance assessments
Table 1: Template for holistic rubrics

| Score | Description |
| --- | --- |
| 5 | Demonstrates complete understanding of the problem. All requirements of task are included in response. |
| 4 | Demonstrates considerable understanding of the problem. All requirements of task are included. |
| 3 | Demonstrates partial understanding of the problem. Most requirements of task are included. |
| 2 | Demonstrates little understanding of the problem. Many requirements of task are missing. |
| 1 | Demonstrates no understanding of the problem. |
| 0 | No response/task not attempted. |
Table 2: Template for analytic rubrics

| Criteria | Beginning (1) | Developing (2) | Accomplished (3) | Exemplary (4) | Score |
| --- | --- | --- | --- | --- | --- |
| Criterion #1 | Description reflecting beginning level of performance | Description reflecting movement toward mastery level of performance | Description reflecting achievement of mastery level of performance | Description reflecting highest level of performance | |
| Criterion #2 | Description reflecting beginning level of performance | Description reflecting movement toward mastery level of performance | Description reflecting achievement of mastery level of performance | Description reflecting highest level of performance | |
| Criterion #3 | Description reflecting beginning level of performance | Description reflecting movement toward mastery level of performance | Description reflecting achievement of mastery level of performance | Description reflecting highest level of performance | |
| Criterion #4 | Description reflecting beginning level of performance | Description reflecting movement toward mastery level of performance | Description reflecting achievement of mastery level of performance | Description reflecting highest level of performance | |
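To make the difference concrete, the brief sketch below (in Python) shows how each type of rubric yields a score. The four criteria mirror the template in Table 2, but the specific ratings are hypothetical examples, not values from the article:

```python
# A minimal sketch of how the two templates above yield scores.
# The ratings here are hypothetical, chosen only for illustration.

# Holistic rubric (Table 1): one overall judgment of the whole
# product or performance, here on the 0-5 scale.
holistic_score = 4  # e.g., "considerable understanding of the problem"

# Analytic rubric (Table 2): each criterion is rated separately on
# the 1-4 scale, then the individual ratings are summed.
analytic_ratings = {
    "Criterion #1": 3,  # Accomplished
    "Criterion #2": 4,  # Exemplary
    "Criterion #3": 2,  # Developing
    "Criterion #4": 3,  # Accomplished
}
total_score = sum(analytic_ratings.values())

print(f"Holistic score: {holistic_score} / 5")
print(f"Analytic total: {total_score} / {4 * len(analytic_ratings)}")  # 12 / 16
```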
Prior to designing a specific rubric, a teacher must decide whether the performance or product will be scored holistically or analytically (Airasian, 2000, 2001). Regardless of which type of rubric is selected, specific performance criteria and observable indicators must be identified as an initial step in development. The decision to score holistically or analytically has several implications, the most important being how the teacher intends to use the results. If an overall, summative score is desired, a holistic scoring approach is more appropriate; if formative feedback is the goal, an analytic scoring rubric should be used. It is important to note that neither type of rubric is inherently better than the other; you must find the format that works best for your purposes (Montgomery, 2001). Other considerations include the time requirements, the nature of the task itself, and the specific performance criteria being observed.
As demonstrated in the templates (Tables 1 and 2), the various levels of student performance can be defined using quantitative (i.e., numerical) or qualitative (i.e., descriptive) labels, and in some instances teachers may want to use both. If a rubric contains four levels of proficiency or understanding on a continuum, quantitative labels typically range from "1" to "4." Qualitative labels give teachers much more flexibility and room for creativity; a common qualitative scale might include the labels master, expert, apprentice, and novice. Nearly any qualitative scale will suffice, provided it "fits" the task.
One potentially frustrating aspect of scoring student work with rubrics is converting rubric scores to "grades." It is not a good idea to think of rubric scores as percentages (Trice, 2000). For example, if a rubric has six levels (or "points"), a score of 3 should not be equated to 50% (an "F" in most letter grading systems). Converting rubric scores to grades or categories is a process of logic rather than mathematics. Trice (2000) suggests that in a rubric scoring system there are typically more score levels at the average and above-average categories (i.e., those equating to grades of "C" or better) than at the below-average categories. For instance, if a rubric consisted of nine score categories, the equivalent grades and categories might look like this:
Table 3: Sample grades and categories

| Rubric Score | Grade | Category |
| --- | --- | --- |
| 8 | | Excellent |
| 7 | | Excellent |
| 6 | | Good |
| 5 | | Good |
| 4 | | Fair |
| 3 | | Fair |
| 2 | | Unsatisfactory |
| 1 | | Unsatisfactory |
| 0 | | Unsatisfactory |
When converting rubric scores to grades (typical at the secondary level) or descriptive feedback (typical at the elementary level), it is important to remember that there is not necessarily one correct way to accomplish this. The bottom line for classroom teachers is that they must find a system of conversion that works for them and fits comfortably into their individual system of reporting student performance.
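As a sketch of this logic-based (rather than percentage-based) conversion, the Table 3 mapping can be written as a simple lookup. Only the categories from the table are used here; how categories map onto letter grades is left to the teacher's own grading system:

```python
# A minimal sketch of a logic-based (not percentage-based) conversion,
# using the score-to-category mapping from Table 3.
RUBRIC_TO_CATEGORY = {
    8: "Excellent", 7: "Excellent",
    6: "Good", 5: "Good",
    4: "Fair", 3: "Fair",
    2: "Unsatisfactory", 1: "Unsatisfactory", 0: "Unsatisfactory",
}

def to_category(score: int) -> str:
    """Convert a 0-8 rubric score to its descriptive category."""
    return RUBRIC_TO_CATEGORY[score]

# Contrast with a naive percentage: on a six-level rubric, a score of 3
# would become 3 / 6 = 50%, an "F" in most letter-grading systems, even
# though the work may be perfectly average.
print(to_category(3))  # -> "Fair", i.e., roughly average work
```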
Steps in the Design of Scoring Rubrics
A step-by-step process for designing scoring rubrics for classroom use is presented below. Information for these procedures was compiled from various sources (Airasian, 2000, 2001; Mertler, 2001; Montgomery, 2001; Nitko, 2001; Tombari & Borich, 1999). The steps are summarized and discussed below, followed by two sample scoring rubrics.
Step 1: Re-examine the learning objectives to be addressed by the task. This allows you to match your scoring guide with your objectives and actual instruction.

Step 2: Identify specific observable attributes that you want to see (as well as those you don't want to see) your students demonstrate in their product, process, or performance. Specify the characteristics, skills, or behaviors that you will be looking for, as well as common mistakes you do not want to see.

Step 3: Brainstorm characteristics that describe each attribute. Identify ways to describe above average, average, and below average performance for each observable attribute identified in Step 2.

Step 4a: For holistic rubrics, write thorough narrative descriptions for excellent work and poor work, incorporating each attribute into the description. Describe the highest and lowest levels of performance, combining the descriptors for all attributes.

Step 4b: For analytic rubrics, write thorough narrative descriptions for excellent work and poor work for each individual attribute. Describe the highest and lowest levels of performance using the descriptors for each attribute separately.

Step 5a: For holistic rubrics, complete the rubric by describing the other levels on the continuum that ranges from excellent to poor work for the collective attributes. Write descriptions for all intermediate levels of performance.

Step 5b: For analytic rubrics, complete the rubric by describing the other levels on the continuum that ranges from excellent to poor work for each attribute. Write descriptions for all intermediate levels of performance for each attribute separately.

Step 6: Collect samples of student work that exemplify each level. These will help you score in the future by serving as benchmarks.

Step 7: Revise the rubric, as necessary. Be prepared to reflect on the effectiveness of the rubric and revise it prior to its next implementation.
These steps involved in the design of rubrics have been summarized in Figure 2.
Figure 2: Designing scoring rubrics: Step-by-step procedures
Two Examples
Two sample scoring rubrics corresponding to specific performance assessment tasks are presented next. Brief discussions precede the actual rubrics. For illustrative purposes, a holistic rubric is presented for the first task and an analytic rubric for the second. It should be noted that either a holistic or an analytic rubric could have been designed for either task.
Example 1:
Subject - Mathematics
Grade Level(s) - Upper Elementary
Mr. Harris, a fourth-grade teacher, is planning a unit on the topic of data analysis, focusing primarily on the skills of estimation and interpretation of graphs. Specifically, at the end of this unit, he wants to be able to assess his students' mastery of the following instructional objectives:
- Students will properly interpret a bar graph.
- Students will accurately estimate values from within a bar graph. (step 1)
Table 4: Math Performance Task Scoring Rubric – Data Analysis

Name _____________________________ Date ___________

| Score | Description |
| --- | --- |
| 4 | Makes accurate estimations. Uses appropriate mathematical operations with no mistakes. Draws logical conclusions supported by graph. Sound explanations of thinking. |
| 3 | Makes good estimations. Uses appropriate mathematical operations with few mistakes. Draws logical conclusions supported by graph. Good explanations of thinking. |
| 2 | Attempts estimations, although many inaccurate. Uses inappropriate mathematical operations, but with no mistakes. Draws conclusions not supported by graph. Offers little explanation. |
| 1 | Makes inaccurate estimations. Uses inappropriate mathematical operations. Draws no conclusions related to graph. Offers no explanations of thinking. |
| 0 | No response/task not attempted. |
Example 2:
Subjects - Social Studies; Probability & Statistics
Grade Level(s) - 9 - 12
Mrs. Wolfe is a high school American government teacher. She is beginning a unit on the electoral process and knows from past years that her students sometimes have difficulty with the concepts of sampling and election polling. She decides to give her students a performance assessment so they can demonstrate their levels of understanding of these concepts. The main idea she wants to focus on is that samples (surveys) can accurately predict the viewpoints of an entire population. Specifically, she wants to be able to assess her students on the following instructional objectives:
- Students will collect data using appropriate methods.
- Students will accurately analyze and summarize their data.
- Students will effectively communicate their results. (step 1)
Table 5: Performance Task Scoring Rubric – Population Sampling

Name ____________________________ Date ________________

| Criteria | Beginning (1) | Developing (2) | Accomplished (3) | Exemplary (4) | Score |
| --- | --- | --- | --- | --- | --- |
| Sampling Technique | Inappropriate sampling technique used | Appropriate technique used to select sample; major errors in execution | Appropriate technique used to select sample; minor errors in execution | Appropriate technique used to select sample; no errors in procedures | |
| Survey/Interview Questions | Inappropriate questions asked to gather needed information | Few pertinent questions asked; data on sample is inadequate | Most pertinent questions asked; data on sample is adequate | All pertinent questions asked; data on sample is complete | |
| Statistical Analyses | No attempt at summarizing collected data | Attempts analysis of data, but uses inappropriate procedures | Proper analytical procedures used, but analysis is incomplete | All proper analytical procedures used to summarize data | |
| Communication of Results | Communication of results is incomplete, unorganized, and difficult to follow | Communicates some important information; not organized well enough to support decision | Communicates most of the important information; shows support for decision | Communication of results is very thorough; shows insight into how data predicted outcome | |

Total Score = ____
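Scoring with this rubric amounts to rating each criterion on the 1-4 scale and summing, as in this brief sketch. The criterion names match Table 5; the ratings shown are invented for illustration:

```python
# Hypothetical ratings for the four criteria in Table 5
# (1 = Beginning ... 4 = Exemplary). The total ranges from 4 to 16.
ratings = {
    "Sampling Technique": 3,
    "Survey/Interview Questions": 4,
    "Statistical Analyses": 3,
    "Communication of Results": 2,
}

# Guard against a common scoring slip: an out-of-range rating.
assert all(1 <= r <= 4 for r in ratings.values()), "ratings must be 1-4"

total = sum(ratings.values())
print(f"Total Score = {total}")  # -> Total Score = 12
```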
Resources for Rubrics on the Web
The following is just a partial list of some Web resources for information about and samples of scoring rubrics.
- "Scoring Rubrics: What, When, & How?" (http://pareonline.net/getvn.asp?v=7&n=3). This article appears in Practical Assessment, Research, & Evaluation and is authored by Barbara M. Moskal. The article discusses what rubrics are, and distinguishes between holistic and analytic types. Examples and additional resources are provided.
- "Performance Assessment-Scoring" (http://www.pgcps.pg.k12.md.us/~elc/scoringtasks.html). Staff in the Prince George's County (MD) Public Schools have developed a series of pages that provide descriptions of the steps involved in the design of performance tasks. This particular page provides several rubric samples.
- "Rubrics from the Staff Room for Ontario Teachers" ( http://www.quadro.net/~ecoxon/Reporting/rubrics.htm ) This site is a collection of literally hundreds of teacher-developed rubrics for scoring performance tasks. The rubrics are categorized by subject area and type of task. This is a fantastic resource…check it out!
- "Rubistar Rubric Generator" (http://rubistar.4teachers.org/)
- "Teacher Rubric Maker" (http://www.teach-nology.com/web_tools/rubrics/) These two sites house Web-based rubric generators for teachers. Teachers can customize their own rubrics based on templates on each site. In both cases, rubric templates are organized by subject area and/or type of performance task. These are wonderful resources for teachers!
References

Airasian, P. W. (2000). Assessment in the classroom: A concise approach (2nd ed.). Boston: McGraw-Hill.
Airasian, P. W. (2001). Classroom assessment: Concepts and applications (4th ed.). Boston: McGraw-Hill.
Chase, C. I. (1999). Contemporary assessment for educators. New York: Longman.
Mertler, C. A. (2001). Using performance assessment in your classroom. Unpublished manuscript, Bowling Green State University.
Montgomery, K. (2001). Authentic assessment: A guide for elementary teachers. New York: Longman.
Moskal, B. M. (2000). Scoring rubrics: What, when, and how? Practical Assessment, Research, & Evaluation, 7(3). Available online: http://pareonline.net/getvn.asp?v=7&n=3
Nitko, A. J. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Merrill.
Tombari, M. & Borich, G. (1999). Authentic assessment in the classroom: Applications and practice. Upper Saddle River, NJ: Merrill.
Trice, A. D. (2000). A handbook of classroom assessment. New York: Longman.
Contact Information

Craig A. Mertler
Educational Foundations & Inquiry Program
College of Education & Human Development
Bowling Green State University
Bowling Green, OH 43403
mertler@bgnet.bgsu.edu
Phone: 419-372-9357 Fax: 419-372-8265