Creating Meaningful Performance Assessments. ERIC Digest E531.

Performance assessment is a viable alternative to norm-referenced tests. Teachers can use performance assessment to obtain a much richer and more complete picture of what students know and are able to do.
DEFINING PERFORMANCE ASSESSMENT

Performance assessment is best understood as a continuum of assessment formats, ranging from the simplest student-constructed responses to comprehensive demonstrations or collections of work over time. It can take many forms, including:

*Conducting experiments.
*Writing extended essays.
*Doing mathematical computations.

Whatever the format, the common features of performance assessment involve:

1. Students' construction rather than selection of a response.
2. Direct observation of student behavior on tasks resembling those commonly required for functioning in the world outside school.
3. Illumination of students' learning and thinking processes along with their answers (OTA, 1992).

Performance assessments measure what is taught in the curriculum. Two terms are core to depicting performance assessment:

1. Performance: A student's active generation of a response that is observable either directly or indirectly via a permanent product.
2. Authentic: The nature of the task and the context in which the assessment occurs are relevant and represent "real world" problems or issues.
HOW DO YOU ADDRESS VALIDITY IN PERFORMANCE ASSESSMENTS?

To be valid, performance assessments should:

1. Have meaning for students and teachers and motivate high performance.
2. Require the demonstration of complex cognition, applicable to important problem areas.
3. Exemplify current standards of content or subject matter quality.
4. Minimize the effects of ancillary skills that are irrelevant to the focus of assessment.
5. Possess explicit standards for rating or judgment.

When considering the validity of a performance test, it is important first to consider how the test or instrument "behaves" given the content covered. Questions such as the following should be asked:

*How does this test relate to other measures of a similar construct?
*Can the measure predict future performances?
*Does the assessment adequately cover the content domain?

A simple illustration of examining such relationships appears at the end of this section. It is also important to review the intended effects of using the assessment instrument. Questions about the use of a test typically focus on the test's ability to reliably differentiate individuals into groups and to guide the methods teachers use to teach the subject matter covered by the test. A word of caution: unintended uses of assessments can have harmful effects. To prevent the misuse of assessments, the following questions should be considered:

*Does use of the instrument result in discriminatory practices against various groups of individuals?
*Is it used to evaluate others (e.g., parents or teachers) who are not directly assessed by the test?
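The first two questions above are often examined by relating students' scores on the performance assessment to their scores on another measure of the same construct, or to a later outcome. The sketch below (Python) illustrates that idea with a Pearson correlation; the students, score scales, and numbers are invented for illustration, and correlation is only one conventional way of examining such evidence, not a procedure prescribed by this digest.

    # Hypothetical illustration: relating performance-assessment scores
    # to another measure of the same construct (validity evidence).
    import math

    def pearson(x, y):
        """Pearson correlation between two equal-length score lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Invented data: rubric scores (1-4) on a performance task and scores
    # on another measure of the same construct for the same students.
    performance_scores = [3, 4, 2, 4, 1, 3, 2, 4]
    other_measure = [28, 35, 22, 33, 15, 27, 24, 31]

    r = pearson(performance_scores, other_measure)
    print(f"Correlation with the other measure: r = {r:.2f}")

A strong positive correlation would support the claim that the two instruments tap a similar construct; the same computation against scores gathered later would speak to the prediction question.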
PROVIDING EVIDENCE FOR THE RELIABILITY AND VALIDITY OF PERFORMANCE ASSESSMENTS

1. Assessment as a Curriculum Event. Externally mandated assessments that bear little, if any, resemblance to the subject-area domain and pedagogy cannot provide a valid or reliable indication of what a student knows and is able to do. The assessment should reflect what is taught and how it is taught. Making an assessment a curriculum event means reconceptualizing it as a series of theoretically and practically coherent learning activities structured so that they lead to a single predetermined end. When planning for assessment as a curriculum event, the following factors should be considered:

*The content of the instrument.
*The length of activities required to complete the assessment.
*The type of activities required to complete the assessment.
*The number of items in the assessment instrument.
*The scoring rubric.

2. Task Content Alignment with Curriculum. Content alignment between what is tested and what is taught is essential. What is taught should be linked to valued outcomes for students in the district.

3. Scoring and Subsequent Communications with Consumers. In large-scale assessment systems, the scoring and interpretation of performance assessment instruments is akin to a criterion-referenced approach to testing. A student's performance is evaluated by a trained rater who compares the student's responses to multitrait descriptions of performances and then gives the student a single number corresponding to the description that best characterizes the performance. Students are compared directly to scoring criteria and only indirectly to each other. In the classroom, every student needs feedback when the purpose of performance assessment is diagnosis and monitoring of student progress. Students can be shown how to assess their own performances when:

*The scoring criteria are well articulated.
*Teachers are comfortable with having students share in their own evaluation process.

4. Linking and Comparing Results Over Time. Linking is a generic term that covers a variety of approaches to making the results of one assessment comparable to those of another. Two appropriate and manageable approaches to linking in performance assessment include:

*Statistical Moderation. This approach is used to compare performances across content areas for groups of students who have taken a test at the same point in time.
*Social Moderation. This is a judgmental approach built on consensus among raters. The comparability of the scores assigned depends substantially on the development of consensus among professionals.

A brief sketch of these scoring and moderation ideas follows this section.
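As a concrete illustration of points 3 and 4 above, the sketch below (Python) represents a criterion-referenced rubric as a simple lookup table and then shows two minimal moderation checks: an exact-agreement rate between two raters, in the spirit of social moderation, and a z-score rescaling of group scores, in the spirit of statistical moderation. The rubric descriptors, raters, and scores are all invented; real programs use far more detailed multitrait rubrics and formal linking procedures.

    # Hypothetical illustration of criterion-referenced scoring and
    # two simple moderation checks.
    from statistics import mean, stdev

    # A much-simplified rubric: each score level carries a description of
    # the performance it characterizes; raters pick the closest match.
    RUBRIC = {
        4: "Complete, accurate response; reasoning fully explained.",
        3: "Mostly accurate response; reasoning partially explained.",
        2: "Partial response; reasoning unclear or incomplete.",
        1: "Minimal response; little evidence of understanding.",
    }

    # Invented scores assigned by two trained raters to eight students.
    rater_a = [4, 3, 3, 2, 4, 1, 2, 3]
    rater_b = [4, 3, 2, 2, 4, 1, 3, 3]

    # Social moderation rests on consensus among raters; the exact-
    # agreement rate is one simple way to monitor that consensus.
    agreement = mean(1 if a == b else 0 for a, b in zip(rater_a, rater_b))
    print(f"Exact rater agreement: {agreement:.0%}")

    # Statistical moderation compares group performances across content
    # areas; z-scores express each score relative to the group's mean
    # and spread, putting different score scales on a common footing.
    math_scores = [3, 4, 2, 4, 1, 3, 2, 4]
    m, s = mean(math_scores), stdev(math_scores)
    z_scores = [(x - m) / s for x in math_scores]
    print("z-scores:", [f"{v:+.2f}" for v in z_scores])

In practice, a low agreement rate would trigger further rater training and discussion until consensus improves, since the comparability of socially moderated scores depends on that consensus.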
HOW CAN TEACHERS INFLUENCE STUDENTS' PERFORMANCES?

When using performance assessments, students' performances can be positively influenced by:

1. Selecting assessment tasks that are clearly aligned or connected to what has been taught.
2. Sharing the scoring criteria for the assessment task with students prior to working on the task.
3. Providing students with clear statements of standards and/or several models of acceptable performances before they attempt a task.
4. Encouraging students to complete self-assessments of their performances.
5. Interpreting students' performances by comparing them to standards that are developmentally appropriate, as well as to other students' performances.
REFERENCES

U.S. Congress, Office of Technology Assessment. (1992, February). Testing in American schools: Asking the right questions (OTA-SET-519). Washington, DC: U.S. Government Printing Office.

Derived from: Elliot, S. N. (1994). Creating meaningful performance assessments: Fundamental concepts. Reston, VA: The Council for Exceptional Children. Product #P5059.

ERIC Digests are in the public domain and may be freely reproduced and disseminated. This publication was prepared with funding from the National Library of Education (NLE), Office of Educational Research and Improvement (OERI), U.S. Department of Education, under contract no. RR93002005. The opinions expressed in this report do not necessarily reflect the positions or policies of NLE, OERI, or the Department of Education.
Title: Creating Meaningful Performance Assessments. ERIC Digest E531.
Descriptors: Definitions; Elementary Secondary Education; *Evaluation Methods; Guidelines; *Performance; *Student Evaluation; Test Reliability; Test Validity
Identifiers: ERIC Digests; *Performance Based Evaluation
http://ericae.net/edo/ED381985.htm