Abstract
Yerri Kardiana. 2011. Using Pictures to Improve the Speaking Ability of the First Grade Students in SMKN 1 Kediri in Describing Things and People. Thesis. Graduate Program in English Language Education. State University of Malang. Advisors: (I) Dr. Enny Irawati, M. Pd. (II) Drs. Fachrrurazy, M. A., Ph. D.
Key words: using pictures, describing things and people, improve, speaking ability.
This research employed a classroom action research design. Considering that the students' speaking ability was still poor, the researcher set out to introduce a technique that would improve it. The technique also needed to be interesting, so that the students would be motivated to communicate independently in task-based speaking activities. Pictures were chosen as a suitable technique to overcome this problem. A preliminary study to identify the speaking problems was carried out first. The classroom action research design was then used to investigate how the students' speaking ability could be improved by using pictures in describing things and people. The research problem was: "How can the speaking ability of the first grade students of SMKN 1 Kediri be improved by using pictures in describing things and people?" The research procedure consisted of four main steps: planning, implementing, observing, and reflecting. The research was conducted in two cycles, each comprising three meetings. The instruments for collecting the data were an observation checklist, field notes, and a speaking assessment.
The findings of the research indicated that using pictures in describing things and people was successful in improving the students' speaking ability. The improvement could be seen from the increase in the students' speaking assessment scores from the pretest to cycle one and cycle two. In cycle one, 60.71% of the students got a poor score, 21.43% got a very poor score, 40.62% got an average score, and 3.12% got a good score. In cycle two, 50% of the students got an average score, 28.12% got a good score, and 6.25% got a very good score. In addition, the findings showed that using pictures in describing things and people was effective in improving the students' involvement in the teaching and learning process. Implementing pictures in describing things and people for teaching speaking involved the following steps: (1) showing a set of pictures to the students, (2) describing the things and people based on the pictures, (3) asking the students questions to check their understanding of the story, (4) describing the pictures once more to make clear what the pictures are about, while writing the description of things and people on the blackboard as a model description for the students, (5) dividing the students into five groups, (6) handing out six sets of pictures to each group, (7) giving the students guided vocabulary related to the pictures, (8) practicing the pronunciation of the vocabulary, (9) asking the students to make brief notes for their descriptions based on the pictures, (10) asking the students to describe the things and people based on the pictures, (11) correcting the students' pronunciation and grammar, and (12) asking the students to describe people using real pictures (famous people).
Based on the findings, it can be concluded that using pictures in describing things and people can improve both the students' speaking ability and their involvement in the teaching and learning process. Therefore, English teachers are advised to implement it as an alternative technique in their English classes, particularly in speaking classes. The principal is advised to provide facilities that improve the English teachers' teaching quality, for example by cooperating with experts to hold in-service training on teaching methods for English teachers. Other researchers are encouraged to conduct further studies implementing pictures in describing things and people at other school levels and for other language skills such as listening and writing.
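The score-band percentages reported in the abstract are simple tallies of students per category. As a rough illustration only, the sketch below uses hypothetical band boundaries and made-up scores (not the thesis's actual rubric or data) to show how such a distribution can be computed for a cycle of 32 students.

```python
from collections import Counter

# Hypothetical band boundaries; the thesis's actual speaking rubric may differ.
BANDS = [(85, "very good"), (70, "good"), (55, "average"), (40, "poor"), (0, "very poor")]

def band(score):
    """Map a numeric speaking score to its category."""
    for cutoff, label in BANDS:
        if score >= cutoff:
            return label
    return "very poor"

def distribution(scores):
    """Return the percentage of students falling in each band."""
    counts = Counter(band(s) for s in scores)
    total = len(scores)
    return {label: round(100 * counts[label] / total, 2) for _, label in BANDS}

cycle_scores = [72, 58, 61, 90, 66, 74, 55, 49] * 4  # 32 made-up scores for one cycle
print(distribution(cycle_scores))
```

Running the same tally on the pretest, cycle one, and cycle two scores is what produces comparisons like the ones reported above.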
Speaking Assessment Grade Sheet
This speaking assessment sheet is the one commonly used by Abu Dhabi IELTS.
CONSISTENCY IN CLASSROOM ASSESSMENT
The Ontario Ministry of Education views the consistent application of classroom assessment practices as being of critical importance to fostering student success in Ontario schools. This manual, “Consistency in Classroom Assessment – Support Materials for Educators”, has been developed with the purpose of providing suggestions to maintain and further improve consistency in the classroom assessment of students.
The support materials reflected in this manual are intended for the use of a broad cross-section of Ontario educators. Furthermore, these support materials have been developed using sound and relevant research and are informed by feedback from educators and students gathered in a series of focus groups held across the province.
This manual contains:
• a summary of relevant research on the topic of consistency in student assessment
• suggestions for teachers on improving consistency in classroom assessment
• suggestions for principals on improving assessment consistency among teachers
• suggestions for supervisory officers on improving consistency among schools
• suggestions to improve consistency of assessment in the final year of secondary school
The manual is not a review of specific practices and suggestions that are found in companion documents such as “Policy to Practice”. Instead, this manual is designed to be an “active document” which will be used frequently to provide teachers, principals and supervisory officers with practical, relevant suggestions for promoting student success through the use and application of consistent and fair assessment practices.
Finally, this manual is intended to engage educators in a meaningful and healthy conversation on the importance of consistent classroom assessment and how as an Ontario educational community, we can further enrich the classroom experience for our students.
An Introduction to Classroom Assessment Techniques
by Diane M. Enerson, Kathryn M. Plank, and R. Neill Johnson
Background knowledge probes can be used at the beginning of a course, at the start of a new unit or lesson, or prior to introducing an important new topic. Once collected and analyzed, the data can be extremely useful when planning subsequent sessions or units of the course. Although many classroom assessment activities can be done for credit, it is usually best to make these probes an ungraded activity.
Discovering that your students' background and preparation are at odds with your expectations can throw even the best-planned lesson or syllabus off-track. However, knowing is certainly better than not knowing. At the very least, such data help you guide students to the appropriate resources for any supplementary assistance they may need.
Developing Classroom Performance Assessments and Scoring Rubrics - Part II. ERIC Digest
by Moskal, Barbara M
A difficulty that is faced in the use of performance assessments is determining how the students' responses will be scored. Scoring rubrics provide one mechanism for scoring student responses to a variety of different types of performance assessments. This two-part Digest draws from the current literature and the author's experience to identify suggestions for developing performance assessments and their accompanying scoring rubrics.
This Digest addresses 1) Developing Scoring Rubrics, 2) Administering Performance Assessments and 3) Scoring, Interpreting and Using Results. Another Digest addresses Writing Goals and Objectives, and Developing Performance Assessments. These categories guide the reader through the four phases of the classroom
assessment process: planning, gathering, interpreting, and using (Moskal, 2000a). The
current article assumes that the reader has a basic knowledge of both performance
assessments and scoring rubrics.
DEVELOPING SCORING RUBRICS
Scoring rubrics are one method that may be used to evaluate students' responses to performance assessments. Two types of scoring rubrics are frequently discussed in the literature: analytic and holistic. Analytic scoring rubrics divide a performance into separate facets, and each facet is evaluated using a separate scale. Holistic scoring rubrics use a single scale to evaluate the larger process. In holistic scoring rubrics, all of the facets that make up the task are evaluated in combination. The recommendations that follow are appropriate to both analytic and holistic scoring rubrics.
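To make the analytic/holistic distinction concrete, here is a minimal sketch; the facet names, point scales, and level descriptors are hypothetical and are not drawn from the Digest.

```python
# Analytic rubric: each facet of the performance is scored on its own scale
# and reported facet by facet (a summary total can be added on top).
analytic_rubric = {"content": 4, "organization": 4, "grammar": 4, "delivery": 4}  # max points per facet

def analytic_score(facet_scores: dict) -> dict:
    """Return per-facet scores plus a summary total."""
    assert set(facet_scores) == set(analytic_rubric), "score every facet exactly once"
    return {**facet_scores, "total": sum(facet_scores.values())}

# Holistic rubric: the whole performance is matched to a single level descriptor.
holistic_levels = {
    4: "fully accomplishes the task; the facets work together with only minor lapses",
    3: "accomplishes the task with noticeable weaknesses in one or more facets",
    2: "partially accomplishes the task",
    1: "attempts the task but largely misses its requirements",
}

def holistic_score(level: int) -> str:
    """The rater picks one level after weighing all facets in combination."""
    return f"Level {level}: {holistic_levels[level]}"

print(analytic_score({"content": 3, "organization": 2, "grammar": 3, "delivery": 4}))
print(holistic_score(3))
```

The difference shows up in the outputs: the analytic rubric reports a score for every facet, while the holistic rubric reports one overall level for the performance.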
Recommendations for developing scoring rubrics:
1. The criteria set forth within a scoring rubric should be clearly aligned with the requirements of the task and the stated goals and objectives. As was discussed earlier, a list can be compiled that describes how the elements of the task map into the goals and objectives. This list can be extended to describe how the rubric criteria address those goals and objectives. Further guidance on the two types of rubric, analytic and holistic, is available through this journal: Mertler (2001) and Moskal (2000b) have both described the differences between analytic and holistic scoring rubrics and how to develop each type of rubric. Books have also been written or compiled (e.g., Arter & McTighe, 2001; Boston, 2002) that provide detailed examinations of the rubric development process and the different types of scoring rubrics.
ADMINISTERING PERFORMANCE ASSESSMENTS
Once a performance assessment and its accompanying scoring rubric are developed, it is time to administer the assessment to students. The recommendations that follow are specifically developed to guide the administration process.
Recommendations for administering performance assessments:
1. Both written and oral explanations of tasks should be clear and concise and
presented in language that the students understand. If the task is presented in written
form, then the reading level of the students should be given careful consideration.
Students should be given the opportunity to ask clarification questions before
completing the task.
2. Appropriate tools need to be available to support the completion of the assessment activity. Depending on the activity, students may need access to library resources, computer programs, laboratories, calculators, or other tools. Before the task is administered, the teacher should determine what tools will be needed and ensure that these tools are available during the task administration.
3. Scoring rubrics should be discussed with the students before they complete the
assessment activity. This allows the students to adjust their efforts in a manner that
maximizes their performance. Teachers are often concerned that by giving the students the criteria in advance, all of the students will perform at the top level. In practice, this rarely (if ever) occurs.
The first two recommendations provided above are appropriate well beyond the use of performance assessments and scoring rubrics. These recommendations are consistent with the Standards of the American Educational Research Association, American Psychological Association & National Council on Measurement in Education (1999) with respect to assessment and evaluation. The final recommendation is consistent with prior articles that concern the development of scoring rubrics (Brualdi, 1998; Moskal & Leydens, 2000).
SCORING, INTERPRETING AND USING RESULTS
As was discussed earlier, a scoring rubric may be used to score student responses to performance assessments. This section provides recommendations for scoring, interpreting and using the results of performance assessments.
Recommendations for scoring, interpreting and using results of performance
assessments:
1. Two independent raters should be able to acquire consistent scores using the
categories described in the scoring rubric. If the categories of the scoring rubric are
written clearly and concisely, then two raters should be able to score the same set of
papers and acquire similar results.
2. A given rater should be able to acquire consistent scores across time using the
scoring rubric. Knowledge of who a student is or the mood of a rater on a given day may impact the scoring process. Raters should frequently refer to the scoring rubric to ensure that they are not informally changing the criteria over time.
3. A set of anchor papers should be used to assist raters in the scoring process. Anchor papers are student papers that have been selected as examples of performances at the different levels of the scoring rubric. These papers provide a comparison set for raters as they score the student responses. Raters should frequently refer to these papers to ensure the consistency of scoring over time.
4. A set of anchor papers with students' names removed can be used to illustrate to
both students and parents the different levels of the scoring rubric. Ambiguities within the rubric can often be clarified through the use of examples. Anchor papers with students' names removed can be used to clarify to both students and parents the expectations set forth through the scoring rubric.
5. The connection between the score or grade and the scoring rubric should be
immediately apparent. If an analytic rubric is used, then the report should contain the
scores for each analytic level. If a summary score or grade is provided, then an
explanation should be included as to how the summary score or grade was determined. Both students and parents should be able to understand how the final grade or score is linked to the scoring criteria.
6. The results of the performance assessment should be used to improve instruction
and the assessment process. What did the teacher learn from the student responses?
How can this be used to improve future classroom instruction? What did the teacher
learn about the performance assessment or the scoring rubric? How can these
instruments be improved for future instruction? The information that is acquired through classroom assessment should be actively used to improve future instruction and assessment.
The first three recommendations concern the important concept of "rater reliability" or the consistency between scores. Moskal and Leydens (2000) examine the concept of rater reliability in an article that was previously published in this journal. A more comprehensive source that addresses both validity and reliability of scoring rubrics is a book by Arter and McTighe (2001), Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance. The American Educational Research Association, American Psychological Association and National Council on Measurement in Education (1999) also address these issues in their Standards document. For information concerning methods for converting rubric scores to grades, see "Converting Rubric Scores to Letter Grades" (Northwest Regional Educational Laboratory, 2002).
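As a rough illustration of the rater-consistency recommendations above, the short sketch below (with hypothetical rubric levels, not data from any cited study) computes the exact-agreement rate between two raters who scored the same set of papers with the same rubric; more formal reliability statistics can of course be used instead.

```python
def exact_agreement(rater_a, rater_b):
    """Percentage of papers on which two raters assigned the same rubric level."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must score the same set of papers")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Hypothetical rubric levels (0-4) assigned by two raters to the same ten papers.
rater_a = [3, 2, 4, 1, 3, 2, 2, 4, 3, 1]
rater_b = [3, 2, 3, 1, 3, 2, 1, 4, 3, 1]
print(f"Exact agreement: {exact_agreement(rater_a, rater_b):.0f}%")  # 80%
```

Low agreement would suggest that the rubric categories need to be written more clearly, or that raters should recalibrate against the anchor papers.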
CONCLUSIONS
The purpose of this article is to provide a set of recommendations for the development of performance assessments and scoring rubrics. These recommendations can be used to guide a teacher through the four phases of classroom assessment: planning, gathering, interpreting, and using. Extensive literature is available on each phase of the assessment process and this article addresses only a small sample of that work. The reader is encouraged to use the previously cited work as a starting place to better understand the use of performance assessments and scoring rubrics in the classroom.
ACKNOWLEDGMENTS
This article was originally developed as part of a National Science Foundation (NSF) grant (EEC 0230702), Engineering Our World. The opinions and ideas expressed in this article are those of the author and not of the NSF.
ERIC Identifier: ED481715
Publication Date: 2003-06-00
Author: Moskal, Barbara M
Source: ERIC Clearinghouse on Assessment and Evaluation
REFERENCES
Boston, C. (Ed.). (2002). Understanding Scoring Rubrics. University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Brualdi, A. (1998). "Implementing performance assessment in the classroom." Practical Assessment, Research & Evaluation, 6(2) [On-line]. Available:
http://ericae.net/pare/getvn.asp?v=6&n=2.
Mertler, C. A. (2001). "Designing scoring rubrics for your classroom." Practical
Assessment, Research & Evaluation, 7(25). Available online:
http://ericae.net/pare/getvn.asp?v=7&n=25.
Moskal, B. (2000a). "An Assessment Model for the Mathematics Classroom."
Mathematics Teaching in the Middle School, 6 (3), 192-194.
Moskal, B. (2000b). "Scoring Rubrics: What, When and How?" Practical Assessment, Research & Evaluation, 7(3) [On-line]. Available:
http://ericae.net/pare/getvn.asp?v=7&n=3.
Northwest Regional Educational Laboratory (2002). "Converting Rubric Scores to Letter Grades." In C. Boston (Ed.), Understanding Scoring Rubrics (pp. 34-40). University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Perlman, C. (2002). "An Introduction to Performance Assessment Scoring Rubrics." In C. Boston (Ed.), Understanding Scoring Rubrics (pp. 5-13). University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Rogers, G. & Sando, J. (1996). Stepping Ahead: An Assessment Plan Development Guide. Terre Haute, Indiana: Rose-Hulman Institute of Technology.
Wiggins, G. (1990). "The case for authentic assessment." Practical Assessment,
Research & Evaluation, 2(2). Available online:
http://ericae.net/pare/getvn.asp?v=2&n=2.
Wiggins, G. (1993). Assessing Student Performances. San Francisco: Jossey-Bass
Publishers.
Developing Classroom Performance Assessments and Scoring Rubrics - Part I. ERIC Digest.
by Moskal, Barbara M
A difficulty that is faced in the use of performance assessments is determining how the students' responses will be scored. Scoring rubrics provide one mechanism for scoring student responses to a variety of different types of performance assessments. This two-part Digest draws from the current literature and the author's experience to identify suggestions for developing performance assessments and their accompanying scoring rubrics.
The suggestions are divided into five categories:
1) Writing Goals and Objectives,
2) Developing Performance Assessments,
3) Developing Scoring Rubrics,
4) Administering Performance Assessments and
5) Scoring, Interpreting and Using Results.
"This Digest addresses the first two categories. Another Digest addresses the last
three."
These categories guide the reader through the four phases of the classroom
assessment process: planning, gathering, interpreting, and using (Moskal, 2000a). The
list of suggestions provided throughout this paper is specific to formal assessment
activities as opposed to informal assessment activities (Stiggins, 1994). Formal
assessment activities refer to activities in which the students are aware that they are
being evaluated; informal assessment activities refer to activities in which the students are not aware that they are being evaluated (Stiggins, 1994). Although some of these suggestions are appropriate for informal assessments, the primary focus of this paper is upon formal assessment activities.
The current article assumes that the reader has a basic knowledge of both performance assessments and scoring rubrics. If these assumptions are incorrect, the reader may wish to review prior articles on performance assessments and scoring rubrics before reading this article. Brualdi's article (1998), "Implementing performance assessment in the classroom", provides an introduction to performance assessments and how they may be used in the classroom. Moskal (2000b) discusses the basics of scoring rubric development in her article, "Scoring Rubrics: What, When and How?" In the article "Designing scoring rubrics for your classroom," Mertler (2001) outlines how to develop and implement scoring rubrics in the classroom.
WRITING GOALS AND OBJECTIVES
Before a performance assessment or a scoring rubric is written or selected, the teacher should clearly identify the purpose of the activity. As is the case with any assessment, a clear statement of goals and objectives should be written to guide the development of both the performance assessment and the scoring rubric. "Goals" are broad statements of expected student outcomes and "objectives" divide the goals into observable behaviors (Rogers & Sando, 1996). Questions such as, "What do I hope to learn about my students' knowledge or skills?," "What content, skills and knowledge should the activity be designed to assess?," and "What evidence do I need to evaluate the appropriate skills and knowledge?", can help in the identification of specific goals and objectives.
Recommendations for writing goals and objectives:
1. The statement of goals and accompanying objectives should provide a clear focus for both instruction and assessment. Another manner in which to phrase this
recommendation is that the stated goals and objectives for the performance
assessment should be clearly aligned with the goals and objectives of instruction.
Ideally, a statement of goals and objectives is developed prior to the instructional
activity and is used to guide both instruction and assessment.
2. Both goals and objectives should reflect knowledge and information that is worthwhile for students to learn. Both the instruction and the assessment of student learning are intentional acts and should be guided through planning. Goals and objectives provide a framework for the development of this plan. Given the critical relationship between goals and objectives and instruction and assessment, goals and objectives should reflect important learning outcomes.
3. The relationship between a given goal and the objectives that describe that goal
should be apparent. Objectives lay the framework upon which a given goal is evaluated.
Therefore, there should be a clear link between the statement of the goal and the
objectives that define that goal.
4. All of the important aspects of the given goal should be reflected through the
objectives. Once again, goals and objectives provide a framework for evaluating the
attainment of a given goal. Therefore, the accompanying set of objectives should reflect the important aspects of the goal.
5. Objectives should describe measurable student outcomes. Since objectives provide the framework for evaluation, they need to be phrased in a manner that specifies the student behavior that will demonstrate the attainment of the larger goal.
6. Goals and objectives should be used to guide the selection of an appropriate
assessment activity. When the goals and objectives are focused upon the recall of
factual knowledge, a multiple choice or short response assessment may be more
appropriate and efficient than a performance assessment. When the goals and
objectives are focused upon complex learning outcomes, such as reasoning,
communication, teamwork, etc., a performance assessment is likely to be appropriate (Perlman, 2002).
Writing goals and objectives, at first, appears to be a simple task. After all, this process
primarily requires clearly defining the desired student outcomes. In practice, however, many teachers initially have difficulty creating goals and objectives that can be used to guide instruction and that can be measured. An excellent resource that specifically focuses upon the "how to" of writing measurable objectives is a book by Gronlund (2000). Other authors have also addressed these issues in subsections of larger works (e.g., Airasian, 2000; 2001; Oosterhoff, 1999).
DEVELOPING PERFORMANCE ASSESSMENTS
As the term suggests, performance assessments require a demonstration of students'
skills or knowledge (Airasian, 2000; 2001; Brualdi, 1998; Perlman, 2002).
Performance assessments can take on many different forms, which include written and oral demonstrations and activities that can be completed by either a group or an individual.
A factor that distinguishes performance assessments from other extended response
activities is that they require students to demonstrate the application of knowledge to a particular context (Brualdi, 1998; Wiggins, 1993). Through observation or analysis of a student's response, the teacher can determine what the student knows, what the student does not know and what misconceptions the student holds with respect to the purpose of the assessment.
Recommendations for developing performance assessments:
1. The selected performance should reflect a valued activity. According to Wiggins
(1990), "The best tests always teach students and teachers alike the kind of work that most matters; they are enabling and forward-looking, not just reflective of prior teaching." He suggests the use of tasks that resemble the type of activities that are known to take place in the workforce (e.g., project reports and presentations, writing legal briefs, collecting, analyzing and using data to make and justify decisions). In other words, performance assessments allow students the opportunity to display their skills and knowledge in response to "real" situations (Airasian, 2000; 2001; Wiggins, 1993).
2. The completion of performance assessments should provide a valuable learning
experience. Performance assessments require more time to administer than do other
forms of assessment. The investment of this classroom time should result in a higher
payoff. This payoff should include both an increase in the teacher's understanding of
what students know and can do and an increase in the students' knowledge of the
intended content and constructs.
3. The statement of goals and objectives should be clearly aligned with the measurable outcomes of the performance activity. Once the task has been selected, a list can be made of how the elements of the task map into the desired goals and objectives. If it is not apparent as to how the students' performance will be mapped into the desired goals and objectives, then adjustments may need to be made to the task or a new task may need to be selected.
4. The task should not examine extraneous or unintended variables. Examine the task and think about whether there are elements of the task that do not map directly into the goals and objectives. Is knowledge required in the completion of the task that is
inconsistent with the purpose? Will lack of this knowledge interfere with or prevent the
students from completing the task for reasons that are not consistent with the task's
purpose? If such factors exist, changes may need to be made to the task or a new task may need to be selected.
5. Performance assessments should be fair and free from bias. The phrasing of the task should be carefully constructed in a manner that eliminates gender and ethnic
stereotypes. Additionally, the task should not give an unfair advantage to a particular
subset of students. For example, a task that is heavily weighted with baseball statistics may give an unfair advantage to the students that are baseball enthusiasts.
The recommendations provided above have been drawn from the broader literature concerning the construction of performance assessments. The interested reader can acquire further details concerning the development process by consulting other articles that are available through this journal (i.e., Brualdi, 1998; Roeber, 1996; Wiggins, 1990) or books (e.g., Wiggins, 1993; 1998) that address this subject.
ERIC Identifier: ED481714
Publication Date: 2003-06-00
Author: Moskal, Barbara M
Source: ERIC Clearinghouse on Assessment and Evaluation
REFERENCES
Boston, C. (Ed.). (2002). Understanding Scoring Rubrics. University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Brualdi, A. (1998). "Implementing performance assessment in the classroom." Practical Assessment, Research & Evaluation, 6(2) [On-line]. Available:
http://ericae.net/pare/getvn.asp?v=6&n=2.
Mertler, C. A. (2001). "Designing scoring rubrics for your classroom." Practical
Assessment, Research & Evaluation, 7(25). Available online:
http://ericae.net/pare/getvn.asp?v=7&n=25.
Moskal, B. (2000a). "An Assessment Model for the Mathematics Classroom."
Mathematics Teaching in the Middle School, 6 (3), 192-194.
Moskal, B. (2000b). "Scoring Rubrics: What, When and How?" Practical Assessment, Research & Evaluation, 7(3) [On-line]. Available:
http://ericae.net/pare/getvn.asp?v=7&n=3.
Northwest Regional Educational Laboratory (2002). "Converting Rubric Scores to Letter Grades." In C. Boston (Ed.), Understanding Scoring Rubrics (pp. 34-40). University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Perlman, C. (2002). "An Introduction to Performance Assessment Scoring Rubrics." In C. Boston (Ed.), Understanding Scoring Rubrics (pp. 5-13). University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Rogers, G. & Sando, J. (1996). Stepping Ahead: An Assessment Plan Development Guide. Terre Haute, Indiana: Rose-Hulman Institute of Technology.
Rudner, L.M. & Schafer, W.D. (Eds.). (2002). What Teachers Need to Know about Assessment. Washington, DC: National Education Association.
Stiggins, R. (1994). Student-Centered Classroom Assessment. New York: Macmillan Publishing Company.
Wiggins, G. (1990). "The case for authentic assessment." Practical Assessment,
Research & Evaluation, 2(2). Available online:
http://ericae.net/pare/getvn.asp?v=2&n=2.
Wiggins, G. (1993). Assessing Student Performances. San Francisco: Jossey-Bass
Publishers.
Cognitive Science and Assessment. ERIC Digest.
by Boston, Carol
Cognitive science is devoted to the study of how people think and learn and how, when, and whether they use what they know to solve problems (Greeno, Collins, & Resnick, 1997; National Research Council, 2001). The cognitive perspective in education encompasses how learners develop and structure their knowledge in specific subject areas and how assessment tasks might be designed to enable students to demonstrate the knowledge and cognitive processes necessary to be judged proficient in these subject areas. This Digest provides educators with an overview of some important facets of cognitive science research and suggests implications for classroom assessment.
HOW DO EXPERTS AND NOVICES DIFFER IN THEIR APPROACH TO PROBLEMS?
Education researchers study the thinking of experts in various subject areas to gain an understanding of what concepts and procedures are most important to teach and how they are interrelated. The concept is that educators can and should be moving students along a continuum toward real-world subject mastery based on a deep understanding of how subject knowledge is organized (Bereiter & Scardamalia, 1986).
When faced with a problem, learners tend to search their memories for a schema, or learned technique for organizing and interpreting information in a certain subject, in
order to solve it (Rumelhart, 1980). Over time, individuals build mental models to guide their problem solving efficiently so they do not depend on trial-and-error approaches and can instead create analogies and make inferences to support new learning (Glaser & Baxter, 1999).
When compared with novice learners, experts in a subject are notable for how
well-organized their knowledge is, which in turn enables them to see patterns quickly, recall information, and study novel problems in light of concepts and principles they know already (Glaser & Chi, 1988). In other words, their schemas are well-connected and they are able to retrieve chunks of information relevant to a task at hand. Experts also have strong problem-solving skills. They know what they know and what they don't know, and plan and monitor the implementation of various mental strategies (Hatano, 1990).
COGNITIVE SCIENCE IN THE CLASSROOM
Ideally, developmental models of learning could be created that note the typical
progression and milestones as a learner advances from novice to competent to expert and describe the types of experiences that lead to change. For example, students generally have naive or intuitive understandings of the sciences, based in part on misconceptions that are corrected as they are exposed to new learning (e.g., Gabel, 1994; Feldman & Minstrell, 2000). And while there are individual differences among learners, when large samples are studied, patterns tend to emerge, particularly related to erroneous beliefs and incorrect procedures. For example, there appear to be a limited number of "subtraction bugs" that account for almost all of the ways young children make mistakes when learning to subtract two- or three-digit numbers, and these are constant even across languages (Brown and Burton, 1978).
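To make the idea of a procedural "bug" concrete, the sketch below simulates one frequently cited pattern, the smaller-from-larger bug, in which a child subtracts the smaller digit from the larger one in every column and never borrows. This is a minimal illustration only; the function name and the worked examples are the editor's assumptions, not items from Brown and Burton's actual bug catalogue.

```python
def buggy_subtract(minuend: int, subtrahend: int) -> int:
    """Simulate the 'smaller-from-larger' subtraction bug: in each column
    the smaller digit is subtracted from the larger one, so borrowing is
    never triggered. Assumes the minuend has at least as many digits as
    the subtrahend."""
    top = str(minuend)
    bottom = str(subtrahend).rjust(len(top), "0")
    digits = [abs(int(a) - int(b)) for a, b in zip(top, bottom)]
    return int("".join(str(d) for d in digits))

# 52 - 38: the correct answer is 14, but the buggy procedure yields 26,
# because 8 - 2 is computed in the units column instead of borrowing.
print(buggy_subtract(52, 38))    # -> 26
print(buggy_subtract(300, 139))  # -> 239 (the correct answer is 161)
```

Because the same faulty procedure reproduces a whole family of wrong answers, a teacher who recognizes the pattern can remediate the procedure rather than re-teaching each missed item.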
Allowing for variations among learners, it is possible to discover the most common
pathways toward acquiring knowledge and use this information diagnostically. For
example, Case, Griffin, and colleagues have developed an assessment tool based on their empirical research regarding how children from ages 4 to 10 change in their
conception of numbers through growth and practice. While 4-year-olds can count
groups of objects, they have to guess if they face a theoretical question such as, "Which is more--four or five?" Between 4 and 6, most children develop a "mental number line" that helps them envision the answer to such a question, even when actual objects aren't present. Between 6 and 8, children gradually come to envision other number lines for counting by 2s, 5s, 10s, and 100s. By 10, many children have a better understanding of the base-10 number system, which enables them to reach a more sophisticated understanding of concepts such as regrouping and estimation (Case, 1996; Griffin and Case, 1997). Teachers can use assessments based on this research to determine their next steps in arithmetic instruction.
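As a purely illustrative sketch, the snippet below shows how the milestones described above might be turned into a crude age-keyed list of diagnostic probes. The probe wordings and age cut-offs are assumptions made for illustration; they are not items from Case and Griffin's assessment tool.

```python
# Hypothetical probes loosely keyed to the developmental milestones
# summarized above (counting objects, the mental number line, skip
# counting, and base-10 regrouping/estimation).
PROBES = [
    (4,  "Count this group of objects aloud."),
    (6,  "Without any objects present: which is more, four or five?"),
    (8,  "Count on by 2s from 6; then count by 10s from 30."),
    (10, "Is 47 + 25 closer to 60 or to 70? Explain how you regrouped."),
]

def next_probe(age: int) -> str:
    """Return the most advanced probe a child of this age is typically ready for."""
    suitable = [task for min_age, task in PROBES if age >= min_age]
    return suitable[-1] if suitable else PROBES[0][1]

print(next_probe(5))  # -> counting-objects probe
print(next_probe(9))  # -> skip-counting probe
```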
More research has been done about domain structure in some disciplines than in
others. Mathematics, physics, beginning reading, and U.S. history are among the areas that have been studied (see, for example, Niemi, 1996, and Wineburg, 1996).
Subject-area standards such as the National Council of Teachers of Mathematics
Standards generally reflect current thinking on cognitive processes and are a good
place for teachers to begin their explorations of this topic. The National Research
Council's How People Learn: Brain, Mind, Experience, and School
(http://stills.nap.edu/html/howpeople1/) provides another helpful introduction.
HOW DO LEARNERS STORE AND ACCESS KNOWLEDGE?
Memory may be divided into two types: short-term, or working memory, which
determines how much mental processing can go on at any one time, and long-term
memory, where people organize their content knowledge. Short-term memory, or working memory, is connected with fluid intelligence, or the ability to solve new and
unusual problems, while long-term memory is connected to crystallized intelligence, or the bringing of past experience to bear on current problems (Anderson, Greeno, Reder, and Simon, 2000). When students are learning a new skill, they must rely heavily on their working memory to represent the task and may need to talk themselves through a task. As the skill moves into long-term memory, it becomes fluent, and eventually, automatic (Anderson, 1982).
To support the learning process, students can be taught meta-cognitive skills, or
techniques to reflect on and assess their own thinking. To improve reading
comprehension, for example, young children can be taught to monitor their
understanding of passages by asking questions, summarizing, clarifying any
uncertainties, and predicting next events (Palincsar & Brown, 1984).
HOW CAN ASSESSMENT DESIGNERS USE FINDINGS FROM COGNITIVE SCIENCE?
The design of any assessment should begin with a statement of purpose for the
assessment and a definition of the particular subject area or content domain. How do people demonstrate knowledge and become competent in this domain? What important aspects of learning do we want to draw inferences from when measuring student achievement in a given subject area? What situations and tasks can we observe to make the appropriate inferences?
Cognitive science calls for test developers to:
* Work from a deep knowledge of the central concepts and principles of a given subject area, and the most important related information.
* Identify or develop those tasks that allow students to demonstrate their
understanding and skills in these areas, as opposed to rote memorization.
* Make sure tasks or questions are sufficiently complex to get at how students have organized their knowledge and how and when they use it.
* Emphasize the contents of long-term memory rather than short-term, or working, memory by not burdening test-takers with requirements to track a large number of response options or large quantities of extraneous information while answering a question.
* Emphasize relevant constructs--for example, a mathematics assessment should not over-emphasize reading and writing, unless communicating about mathematics is the skill to be measured.
* Not limit choice of item format. Both multiple-choice and
performance-based assessments have the potential to be effective or ineffective.
Carefully constructed multiple-choice questions can tap complex cognitive processes, not just lower level skills, as traditionally believed. And performance assessments, though generally praised for capturing higher level skills, may inadvertently focus on lower level skills (Baxter & Glaser, 1998; Hamilton, Nussbaum, and Snow, 1997; Linn, Baker, & Dunbar, 1991).
* Regard task difficulty in terms of the underlying knowledge and cognitive processes required, rather than statistical information such as how many respondents answered correctly (a minimal blueprint sketch follows this list).
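One way to act on the last two recommendations is to tag each item with the cognitive process it is meant to elicit and then check coverage, rather than relying on p-values alone. The item records and process labels below are hypothetical; this is a minimal sketch of the bookkeeping, not a prescribed blueprint format.

```python
from collections import Counter

# Hypothetical item blueprint: each test item carries the cognitive process
# it is intended to elicit, alongside its empirical difficulty (p-value).
items = [
    {"id": "m01", "process": "recall fact",       "p_value": 0.91},
    {"id": "m02", "process": "apply procedure",   "p_value": 0.74},
    {"id": "m03", "process": "explain reasoning", "p_value": 0.58},
    {"id": "m04", "process": "recall fact",       "p_value": 0.88},
]

coverage = Counter(item["process"] for item in items)
print("Cognitive-process coverage:", dict(coverage))

# Flag blueprint gaps: processes the assessment claims to value but never samples.
intended = {"recall fact", "apply procedure", "explain reasoning",
            "transfer to a novel problem"}
missing = intended - set(coverage)
print("Processes not yet assessed:", missing)
```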
At the classroom assessment level, cognitive science findings encourage teachers to:
* Teach learners how and when to apply various approaches and procedures.
* Teach meta-cognitive skills within content areas so learners become capable of directing their thinking and reflecting on their progress.
* Observe students as they solve problems.
* Have students think aloud as they work or describe the reasoning that leads them to a particular solution.
* Analyze student errors on assignments or tests to determine which students got a question or problem wrong and why it appeared difficult for them. Knowing the source of difficulty can lead to more targeted, effective remediation (a minimal error-tally sketch follows this list).
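A minimal sketch of such an error tally is shown below. The response records, student names, and field layout are hypothetical; a real gradebook or answer-sheet export would look different, but the aggregation idea is the same: find which items were missed, by whom, and which wrong answer was most common.

```python
from collections import Counter, defaultdict

# Hypothetical response records: (student, item_id, chosen_option, correct_option).
responses = [
    ("ana",  "q1", "B", "C"),
    ("budi", "q1", "B", "C"),
    ("cici", "q1", "C", "C"),
    ("ana",  "q2", "A", "A"),
    ("budi", "q2", "D", "A"),
]

wrong_by_item = defaultdict(Counter)
students_wrong = defaultdict(set)

for student, item, chosen, correct in responses:
    if chosen != correct:
        wrong_by_item[item][chosen] += 1   # which wrong option was picked
        students_wrong[item].add(student)  # who needs follow-up

for item, picks in wrong_by_item.items():
    distractor, count = picks.most_common(1)[0]
    print(f"{item}: {len(students_wrong[item])} student(s) missed it; "
          f"most common wrong choice was {distractor!r} ({count}x)")
```

A heavily chosen distractor often points to a shared misconception, which is exactly the kind of diagnostic signal the cognitive perspective encourages teachers to look for.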
Teachers should also be aware that acquiring important knowledge and skills at an
in-depth level takes a significant amount of time, practice, and feedback.
ERIC Identifier: ED481716
Publication Date: 2003
Author: Boston, Carol
Source: ERIC Clearinghouse on Assessment and Evaluation
REFERENCES
Anderson, J. (1982). Acquisition of cognitive skill. Psychological Review, 89, 369-406.
Anderson, J., Greeno, J., Reder, L., and Simon, H.A. (2000). Perspectives on learning, thinking, and activity. Educational Researcher, 29 (4): 11-13.
Baxter, G. and Glaser, R. (1998). Investigating the cognitive complexity of science
assessments. Educational Measurement: Issues and Practices, 17 (3): 37-45.
Bereiter, C. & Scardamalia, M.(1986). Educational relevance in the study of expertise. Interchange, 17 (2): 10-19.
Brown, J.S. and Burton, R.R. (1978). Diagnostic models for procedural bugs in basic
mathematical skills. Cognitive Science, 2, 155-192.
Case, R. (1996). Introduction - Reconceptualizing the development of children's
conceptual structures and their development in middle childhood. Monographs of the Society for Research in Child Development, 61 (1-2): 1-26.
Feldman, A., & Minstrell, J. (2000). Action research as a research methodology for the study of the teaching and learning of science. In E. Kelly & R. Lesh (Eds.), Handbook of Research Design in Mathematics and Science Education. Mahwah, NJ: Erlbaum.
Gabel, D., ed. (1994). Handbook of Research on Science Teaching and Learning. New York: Macmillan.
Glaser, R. and Baxter, G. (1999). Assessing active knowledge. Paper presented at the 1999 CRESST Conference, Benchmarks for Accountability: Are We There Yet? UCLA, Los Angeles.
Glaser, R. and Chi, M. (1988). Overview. In M. Chi, R. Glaser, & M. Farr (Eds.), The Nature of Expertise (pp. xv-xxvii). Hillsdale, NJ: Erlbaum.
Greeno, J.G., Collins, A.M., & Resnick, L.B. (1997). Cognition and learning. In D.
Berliner & R. Calfee (Eds.), Handbook of Educational Psychology (pp. 15-47). New York: Simon & Schuster Macmillan.
Griffin, S., and Case, R. (1997). Re-thinking the primary school math curriculum: An approach based on cognitive science. Issues in Education, 3, 1-65.
Hamilton, L., Nussbaum, E., & Snow, R. (1997). Interview procedures for validating science assessments. Applied Measurement in Education, 10, 181-200.
Hatano, G. (1990). The nature of everyday science: A brief introduction. British Journal of Developmental Psychology, 8, 245-250.
Linn, R., Baker, E., & Dunbar, S. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20 (8):15-21.
National Research Council (2001). Knowing What Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academy Press.
Niemi, D. (1996). Assessing conceptual understanding in mathematics:
Representations, problem solutions, justifications, and explanations. Journal of
Educational Research, 89, 351-363.
Palincsar, A. and Brown, A. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117-175.
Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. Spiro, B. Bruce, & W. Brewer (Eds.), Theoretical Issues in Reading Comprehension (pp. 33-58). Hillsdale, NJ: Erlbaum.
Wineburg, S. S. (1996). The psychology of learning and teaching history. In D. Berliner & R. Calfee (Eds.), Handbook of Educational Psychology (pp. 423-437). New York: Simon & Schuster Macmillan.