Clearinghouse on Assessment and Evaluation

ERIC Documents Database Citations & Abstracts for Constructing Scoring Rubrics


Instructions for ERIC Documents Access

Search Strategy:
Scoring Rubrics [ERIC Identifier] OR Rubric* [title word]
AND
Test Construction OR Student Evaluation OR Evaluation Criteria OR Evaluation Methods OR Performance Based Assessment [as ERIC Descriptors]

  ED424257  TM029129
  Rubrics: A Handbook for Construction and Use.
  Taggart, Germaine L., Ed.; Phifer, Sandra J., Ed.; Nixon, Judy A., 
Ed.; Wood, Marilyn, Ed.
  1998
  152p.
  ISBN: 1-56676-652-4
  Available From: Technomic Publishing Company, Inc., 851 New Holland 
Avenue, Box 3535, Lancaster, PA 17604; Tel: 800-233-9936 (Toll Free); 
Web Site: http://www.techpubs.com ($34.95).
  Document Type: BOOK (010);  COLLECTION (020);  TEACHING GUIDE (052)
  Target Audience: Practitioners; Teachers
  This handbook provides teachers and administrators with strategies 
to construct, adapt, and use rubrics.  A rubric is defined as a tool 
for assessing instruction and performance according to predetermined 
expectations and criteria.  The chapters in this text contain 
guidance on formulating, applying, and reviewing the pros and cons of 
this form of alternative assessment.  Cross-curricular rubrics are 
provided, which should be useful in many classroom scenarios either 
as constructed or after being adapted to meet the needs of the 
classroom situation.  The following chapters are included: (1)
"Assessment That Drives Instruction" (Ethel Edwards); (2) "Program Rubrics"
(Deliece Mullen, Judy A. Nixon, Sandra J. Phifer, Germaine L. 
Taggart, and Marilyn Wood); (3) "Student Implementation of the 
Rubric" (Nancy Harman); (4) "Rubrics: Setting Criteria for Young 
Learners" (Sandra J. Phifer and Judy A. Nixon); (5) "Rubrics: A Cross-
Curricular Approach to Assessment" (Germaine L. Taggart and Marilyn 
Wood); (6) "Reading, Writing, and Classroom Rubrics: Ways To Motivate 
Quality Learning" (Craig S. Shwery); (7) "Using Rubrics in Specialty 
Areas" (Sandra J. Phifer); (8) "Student Computer Use and Assessment" 
(John Neal); (9) "The Diverse Learner: Setting Meaningful Criteria" 
(Juliann Bliese); and (10) "Rubrics: A Tool for Ongoing Teacher
Evaluation" (Germaine L. Taggart).  (Contains 79 figures.) (SLD)
  Descriptors: Computer Assisted Testing; *Criteria; Diversity 
(Student); *Educational Assessment; Elementary Secondary Education;
Performance Based Assessment; *Student Evaluation; Teacher Evaluation; 
*Test Construction; Test Use
  Identifiers: *Scoring Rubrics

  
  ED423522  CS216463
  35 Rubrics and Checklists To Assess Reading and Writing: Time-
Saving Reproducible Forms for Meaningful Literacy Assessment.
  Fiderer, Adele
  1998
  81p.
  ISBN: 0-590-13102-8
  Available From: Scholastic, Inc., Penguin USA, P.O. Box 120, 
Bergenfield, NJ 07621; Tel: 800-526-0275 (Toll-Free); Fax: 201-385-
6521.
  Document Type: BOOK (010);  TEACHING GUIDE (052)
  Target Audience: Practitioners; Teachers
  Intended for teachers of grades K-2, this book provides rubrics 
developed by and with teachers to assess a wide variety of reading 
and writing activities in primary classrooms.  The rubrics and 
checklists are offered as reproducible forms.  Most of the rubrics 
are accompanied by a matching form for children's self-assessment.  
Many of the assessment tasks include planning or graphic organizers 
that encourage children to write, draw, map, or outline their ideas 
before they begin to create their final work products.  Sections in 
the book are: Writing; Spelling; Oral Reading Fluency; Reading 
Comprehension; Using Book Covers as Assessment of Literary Knowledge; 
Content Area Learning; Create Your Own Rubrics; Observational 
Checklists for Assessing Early Reading Skills; and a Class Record 
Form.  (CR)
  Descriptors: Classroom Techniques; Early Reading; Learning 
Strategies; Primary Education; *Reading Achievement; *Reading 
Instruction; Reading Skills; Self Evaluation (Individuals); *Spelling 
Instruction; Teacher Developed Materials; Teaching Methods;
*Writing Evaluation; *Writing Instruction; Writing Processes
  Identifiers: Graphic Organizers

  
  ED421542  TM028868
  A Collection of Performance Tasks and Rubrics. High School 
Mathematics.
  Danielson, Charlotte; Marquez, Elizabeth
  1998
  212p.; For collections for upper elementary and middle school, see 
TM 028 866-867 [which follow immediately in this bibliography].
  ISBN: 1-883001-49-8
  Available From: Eye on Education, 6 Depot Way West, Suite 106, 
Larchmont, NY 10538 ($26.95).
  Document Type: BOOK (010);  NON-CLASSROOM MATERIAL (055)
  Target Audience: Practitioners; Teachers
  This book is a guide to the development and use of performance 
tasks and rubrics in the high school mathematics classroom.  It 
contains a rationale for, and a discussion of strengths and 
limitations of, performance tasks to assess student achievement and 
progress in mathematics.  A field-tested process is offered for 
developing performance tasks and rubrics.  Chapter 1, "Introduction," 
provides an introduction to performance assessment and how it differs 
from traditional testing.  Chapter 2, "Why Performance Assessment," 
presents the rationale for performance assessment and compares its
strengths and weaknesses with those of traditional testing.  In
Chapter 3, "Making an Evaluation Plan," there are suggestions for 
making an evaluation plan and linking that plan to the overall 
approach to curriculum development.  Chapter 4, "Evaluating Complex 
Performance," contains an overview of evaluating complex performance 
and includes a description of evaluating nonschool, yet complex, 
performance that can be used in a workshop setting to introduce 
educators to performance assessment.  Chapters 5 and 6, "Creating a 
Performance Task" and "Creating a Rubric," offer the step-by-step 
procedure for creating a performance task and a rubric for classroom 
use, and Chapter 7, "Adapting Existing Performance Tasks and 
Rubrics," suggests techniques for adapting an existing performance 
task to the specific classroom.  Chapter 8, "High School Mathematics 
Performance Tasks," offers performance tasks for the major topics in 
high school mathematics.  An appendix contains handouts to be 
distributed to students for each of the 21 tasks.  (Contains nine 
figures.) (SLD)
  Descriptors: Achievement Tests; Curriculum Development; Evaluation Methods;
High School Students; *High Schools; *Mathematics Tests; *Performance Based
Assessment; Scoring; *Secondary School Mathematics; Student Evaluation; *Test
Construction; Test Use
  Identifiers: *Scoring Rubrics


  ED421541  TM028867
  A Collection of Performance Tasks and Rubrics. Middle School 
Mathematics.
  Danielson, Charlotte
  1997
  200p.; For collections for upper elementary school and high school, 
see TM 028 866 and 028 868.
  ISBN: 1-883001-33-1
  Available From: Eye on Education, 6 Depot Way West, Suite 106, 
Larchmont, NY 10538 ($26.95).
  Document Type: BOOK (010);  NON-CLASSROOM MATERIAL (055)
  Target Audience: Practitioners; Teachers
  This book is a guide to the development and use of performance 
tasks and rubrics in the middle school mathematics classroom.  It 
contains a rationale for, and a discussion of strengths and 
limitations of, performance tasks to assess student achievement and 
progress in mathematics.  A field-tested process is offered for 
developing performance tasks and rubrics.  Chapter 1, "Introduction," 
provides an introduction to performance assessment and how it differs 
from traditional testing.  Chapter 2, "Why Performance Assessment," 
presents the rationale for performance assessment and compares its
strengths and weaknesses with those of traditional testing.  In
Chapter 3, "Making an Evaluation Plan," there are suggestions for 
making an evaluation plan and linking that plan to the overall 
approach to curriculum development.  Chapter 4, "Evaluating Complex 
Performance," contains an overview of evaluating complex performance 
and includes a description of evaluating nonschool, yet complex, 
performance that can be used in a workshop setting to introduce 
educators to performance assessment.  Chapters 5 and 6, "Creating a 
Performance Task" and "Creating a Rubric," offer the step-by-step 
procedure for creating a performance task and a rubric for classroom 
use, and Chapter 7, "Adapting Existing Performance Tasks and 
Rubrics," suggests techniques for adapting an existing performance 
task to the specific classroom.  Chapter 8, "Middle School 
Mathematics Performance Tasks," offers performance tasks for the 
major topics in middle school mathematics.  An appendix contains 
handouts to be distributed to students for each of the 24 tasks.  (Contains
nine figures.) (SLD)
  Descriptors: Achievement Tests; Curriculum Development; *Elementary 
School Mathematics; Evaluation Methods; Intermediate Grades; Junior 
High Schools; *Mathematics Tests; *Middle Schools; *Performance Based 
Assessment; Scoring; *Secondary School Mathematics; Student Evaluation;
*Test Construction; Test Use
  Identifiers: Middle School Students; *Scoring Rubrics

  
  ED421540  TM028866
  A Collection of Performance Tasks and Rubrics. Upper Elementary 
School Mathematics.
  Danielson, Charlotte
  1997
  209p.; For collections for middle school and high school, see TM 
028 867-868.
  ISBN: 1-883001-39-0
  Available From: Eye on Education, 6 Depot Way West, Suite 106, 
Larchmont, NY 10538 ($26.95).
  Document Type: BOOK (010);  NON-CLASSROOM MATERIAL (055)
  Target Audience: Practitioners; Teachers
  This book is a guide to the development and use of performance 
tasks and rubrics in the upper elementary school mathematics 
classroom.  It contains a rationale for, and a discussion of 
strengths and limitations of, performance tasks to assess student 
achievement and progress in mathematics.  A field-tested process is 
offered for developing performance tasks and rubrics.  Chapter 1, 
"Introduction," provides an introduction to performance assessment and
how it differs from traditional testing.  Chapter 2, "Why
Performance Assessment," presents the rationale for performance
assessment and compares its strengths and weaknesses to those of
traditional testing.  In Chapter 3, "Making an Evaluation Plan," 
there are suggestions for making an evaluation plan and linking that 
plan to the overall approach to curriculum development.  Chapter 4,
"Evaluating Complex Performance," contains an overview of evaluating
complex performance and includes a description of evaluating nonschool,
yet complex, performance that can be used in a workshop setting to 
introduce educators to performance assessment.  Chapters 5 and 6, 
"Creating a Performance Task" and "Creating a Rubric," offer the step-
by-step procedure for creating a performance task and a rubric for 
classroom use, and Chapter 7, "Adapting Existing Performance Tasks 
and Rubrics," suggests techniques for adapting an existing 
performance task to the specific classroom.  Chapter 8, "Upper 
Elementary School Mathematics Performance Tasks," offers performance 
tasks for the major topics in upper elementary school mathematics.  An
appendix contains handouts to be distributed to students for each of
the 24 tasks.  (Contains nine figures.) (SLD)
  Descriptors: Achievement Tests; Curriculum Development; Elementary 
Education; *Elementary School Mathematics; Evaluation Methods;
*Intermediate Grades; *Mathematics Tests; *Performance Based Assessment; Scoring;
Student Evaluation; *Test Construction; Test Use
  Identifiers: *Scoring Rubrics

  
  EJ552987  SE558624
  Developing Alternative Assessments Using the Benchmarks.
  Shepardson, Daniel P.; Jackson, Vicki
  Science and Children, v35 n2 p34-40 Oct   1997
  Document Type: TEACHING GUIDE (052);  JOURNAL ARTICLE (080);  GENERAL
REPORT (140)
  Describes a process for developing alternative assessment 
instruments and student responses and illustrates the use of scoring 
rubrics.  Focus is on the student's thought processes and student 
performance.  (AIM)
  Descriptors: Elementary Education; Evaluation Criteria; *Evaluation 
Methods; Evaluation Needs; *Science Education; *Science Process 
Skills; Science Programs; Scoring
  Identifiers: Alternative Assessment; *Scoring Rubrics

  
  EJ552014  EA533913
  What's Wrong--and What's Right--with Rubrics.
  Popham, W. James
  Educational Leadership, v55 n2 p72-75 Oct   1997
  Document Type: JOURNAL ARTICLE (080);  EVALUATIVE REPORT (142)
  The term "rubric" refers to a scoring guide used to evaluate the 
quality of students' constructed responses (written compositions, 
oral presentations, or science projects).  Although educators rave 
about rubrics, the vast majority are instructionally fraudulent.  Problems
arise when rubrics are too task-specific, too general, or too lengthy,
or when they confuse the skill being tested with the test itself.  (MLH)
  Descriptors: Definitions; Elementary Secondary Education; *Evaluation
Criteria; *Grading; Guidelines; Holistic Approach; Misconceptions;
*Student Evaluation
  Identifiers: Analytic Approach; *Scoring Rubrics

  
  EJ538265  SE557219
  Design Your Own Rubric.
  Luft, Julie
  Science Scope, v20 n5 p25-27 Feb   1997
  ISSN: 0887-2376
  Document Type: TEACHING GUIDE (052);  JOURNAL ARTICLE (080)
  Discusses the use of rubrics in assessment in science education. 
Highlights types of rubrics and benefits of rubrics in the classroom.  
Outlines steps to assist educators in constructing a first rubric or 
refining a current rubric.  (JRH)
  Descriptors: Elementary Secondary Education; *Evaluation; *Science 
Education

  
  EJ537326  CS752751
  Creating a Two-Tiered Portfolio Rubric.
  Burch, C. Beth
  English Journal, v86 n1 p55-58 Jan   1997
  Special Issue: Alternative Assessment.
  ISSN: 0013-8274
  Document Type: PROJECT DESCRIPTION (141);  TEACHING GUIDE (052);  JOURNAL
ARTICLE (080)
  Describes how to create a rubric for portfolios which includes 
quantity and quality.  Presents suggestions for implementing a two-
tiered portfolio rubric.  (RS)
  Descriptors: Classroom Techniques; Evaluation Methods; Grading; *Portfolio
Assessment; *Portfolios (Background Materials); Program Implementation;
Secondary Education; *Student Evaluation
  Identifiers: Alternative Assessment


  EJ528634  SP525293
  Establishing Validity for Performance-Based Assessments: An 
Illustration for Collections of Student Writing.
  Novak, John R.; And Others
  Journal of Educational Research, v89 n4 p220-33 Mar-Apr 1996
  ISSN: 0022-0671
  Available From: UMI
  Document Type: RESEARCH REPORT (143);  JOURNAL ARTICLE (080)
  Techniques for establishing the reliability and validity of student 
writing assessment are presented.  Raters scored collections of 
elementary students' narrative writing with holistic scores from two 
rubrics (one established and one new, performance-based rubric).  The 
new rubric proved reliable and valid, though correlational patterns 
were not clear.  (SM)
  Descriptors: Elementary Education; Elementary School Students;
Evaluation Methods; *Performance Based Assessment; Personal Narratives; Portfolio
Assessment; *Student Evaluation; *Test Reliability; *Test Validity; *Writing
Skills
  Identifiers: California; *Scoring Rubrics

  
  EJ516639  IR532244
  The Multimedia Report: Rubrics--Keys to Improving Multimedia 
Presentations.
  Tuttle, Harry Grover
  MultiMedia Schools, v3 n1 p30-33 Jan-Feb   1996
  Journal availability: Online, Inc., Subscription Dept., 462 Danbury 
Rd., Wilton, CT 06897.
  ISSN: 1075-0479
  Document Type: TEACHING GUIDE (052);  JOURNAL ARTICLE (080)
  Discusses the use of evaluation criteria or "rubrics" by teachers 
to assess student-produced multimedia presentations and by students 
to guide them through their work.  Describes types of rubrics and 
presents guidelines to help teachers create and use rubrics.  Concludes
that rubrics can improve presentations by allowing students to be
in control of their learning.  (JMV)
  Descriptors: *Evaluation Criteria; Evaluation Methods; *Guidelines; 
*Learner Controlled Instruction; *Multimedia Materials; *Student
Evaluation; Student Improvement; *Student Projects; Teacher Student
Relationship
  Identifiers: Presentation Mode

 
  ED411273  TM027246
  Creating Rubrics through Negotiable Contracting and Assessment.
  Stix, Andi
  1996
  9p.; Paper presented at the National Middle School Conference 
(Baltimore, MD, November 1, 1996). For related document, see TM 027 
247.
  Document Type: PROJECT DESCRIPTION (141);  CONFERENCE PAPER (150)
  An approach to assessment is described that allows students to 
understand and help decide the criteria for good work.  It is called 
"negotiable contracting." Negotiable contracting makes assessment a 
highly individualized process that recognizes the subtly different 
ways in which students master skills.  Students and teachers jointly 
create a ratings chart called a rubric.  The rubric specifically 
identifies and ranks the criteria for assessing students' performance.  
Inside the rubric the criteria for each level of achievement are 
explained in detail, along with the weight to be given to each skill.  
Students involved in developing a rubric are clearer about the
skills they need to master a lesson and how well they are progressing.  
As added reinforcement of the lesson, students work cooperatively in 
small groups to try out their ideas.  The rankings used in a rubric 
should be neutral words that avoid the implication of failure 
inherent in a generalized A-F or numerical grade.  The rubric should 
also have an even number of ratings to eliminate the temptation to 
award a middle ranking.  Along with the rubrics developed for 
individual lessons, each student's assessment should encompass a look 
at the progress the student has made during the year.  Examples of 
work should be collected into a portfolio for an end-of-the-year
assessment.  It is critical to the success of negotiable contracting to
have the understanding and support of parents, who are probably more 
familiar with a traditional grading system.  (Contains three tables.) 
(SLD)
  Descriptors: *Cooperative Learning; *Educational Assessment; Grading;
Intermediate Grades; Junior High Schools; Middle Schools; Parent 
Participation; *Participative Decision Making; Performance Contracts; 
*Portfolio Assessment; Rating Scales; Reinforcement; *Student 
Participation
  Identifiers: *Scoring Rubrics
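The rubric design Stix describes — jointly negotiated criteria, each with a weight, rated on an even number of neutrally worded levels — lends itself to a small worked sketch. The Python below is a hypothetical illustration only: the rating labels, criteria, and weights are invented, not taken from the paper.

```python
# Sketch of a negotiated rubric: weighted criteria rated on an EVEN
# number of neutral labels (no failing grade, no tempting midpoint).
# Labels, criteria, and weights are hypothetical examples.

RATINGS = ["emerging", "developing", "capable", "accomplished"]

# criterion -> (weight, description of what is being judged)
rubric = {
    "organization": (0.4, "structure and flow of the work"),
    "evidence":     (0.6, "support for claims with sources"),
}

def weighted_score(marks):
    """Combine per-criterion rating labels into one weighted score.

    marks: {criterion: rating label}; labels map to points 1..len(RATINGS).
    """
    total = 0.0
    for criterion, (weight, _description) in rubric.items():
        total += weight * (RATINGS.index(marks[criterion]) + 1)
    return total

# "capable" (3 points) weighted 0.4 plus "accomplished" (4) weighted 0.6:
print(round(weighted_score({"organization": "capable",
                            "evidence": "accomplished"}), 2))  # 3.6
```

The even-length `RATINGS` list reflects the abstract's point that an even number of levels removes the temptation to award a noncommittal middle ranking.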

  
  ED401899  IR056142
  RUBRICS for the Assessment of Information Literacy.
  Colorado State Dept. of Education, Denver. State Library and Adult 
Education Office.  Jun 1996
  22p.; Based on the Information Literacy Guidelines for Colorado 
Students, Teachers and School Library Media Specialists.
  Document Type: INSTRUCTIONAL MATERIAL (051);  TEACHING GUIDE (052)
  A rubric is a descriptive measurement for defining what a learner
should know and be able to do.  This document was created to define
every student's knowledge and ability as they: construct meaning
from information; create a quality product; learn independently;
participate as a group member; and use information and information 
technologies responsibly and ethically.  The rubrics are designed in 
a matrix, or grid, of benchmarks which define the information 
literate student.  The far left column contains the Target 
Indicators, or the individual components of each of the five 
information literacy guidelines.  Each target indicator is followed 
by four qualities, or key behavior skills, to be measured.  These are 
written in student language and are labeled "In Progress," 
"Essential," "Proficient," and "Advanced." The first page provides an 
overview for all five guidelines; pages 2-8 address specific 
benchmarks.  The final page is a checklist of information literacy 
guidelines for students and teachers which may be used in the assessment
process.  These guidelines describe students as: knowledge seekers; quality
producers; self-directed learners; group participants; and responsible
information users.  The ideal application and use of
these assessments is in a collaborative curriculum involving the student,
teacher, media specialist, and other stakeholders in the school environment. 
They are applicable to all grades and content areas.  (Author/AEF)
  Descriptors: Computer Literacy; Cooperative Learning; *Educational
Assessment; Guidelines; Independent Study; *Information Literacy;
Information Technology; Learner Controlled Instruction; *Library Skills;
*Measurement Techniques; Skill Development; *Users (Information)
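The grid layout this document describes — target indicators down the left, four student-language quality levels across the top — can be represented as a simple data structure. In the sketch below the level labels are the ones named in the abstract, but the indicator and its per-level descriptors are invented placeholders, not Colorado's actual wording.

```python
# Benchmark levels named in the Colorado document.
LEVELS = ["In Progress", "Essential", "Proficient", "Advanced"]

# One row of the rubric grid: a target indicator plus one descriptor
# per level.  Indicator and descriptor texts here are hypothetical.
rubric_grid = [
    {
        "target_indicator": "Constructs meaning from information",
        "descriptors": {
            "In Progress": "Restates facts without connecting them",
            "Essential":   "Summarizes main ideas accurately",
            "Proficient":  "Draws supported conclusions from sources",
            "Advanced":    "Synthesizes sources into new insight",
        },
    },
]

def level_of(row, observed_descriptor):
    """Return the benchmark level whose descriptor matches the observation."""
    for level, text in row["descriptors"].items():
        if text == observed_descriptor:
            return level
    return None

print(level_of(rubric_grid[0], "Summarizes main ideas accurately"))  # Essential
```

Each of the five information-literacy guidelines would contribute several such rows, one per target indicator, giving the matrix of benchmarks the abstract describes.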

  
  ED401309  TM025870
  Constructing Scoring Rubrics: Using "Facets" To Study Design 
Features of Descriptive Rating Scales.
  Myford, Carol M.; And Others
  Apr 1996
  61p.; Paper presented at the Annual Meeting of the American 
Educational Research Association (New York, NY, April 8-12, 1996).
  Document Type: EVALUATIVE REPORT (142);  CONFERENCE PAPER (150)
  Developing scoring rubrics to evaluate student work was studied, 
concentrating on the use of intermediate points in rating scales.  How
scales that allow for intermediate points between defined categories
should be constructed and used was explored.  In the recent National
Assessment of Educational Progress (NAEP) visual arts field test,
researchers experimented with several formats for constructing scoring
rubrics.  Some descriptive graphic rating scales (continuous score
scales) were pilot tested by 11 raters who scored the NAEP visual 
arts test for grades 4 and 8. Descriptive graphic ratings were 
designed to evaluate 4 test production blocks from the assessment, for
a total of 50 pieces of student work.  The "Facets" computer
software was used to analyze the rating data.  Raters were able to 
use the descriptive rating scales reliably.  Some of the constructed
scales were able to support 7 to 10 rating points rather than the
traditional 3 or 4 points.  However, there was little appreciable 
gain in reliability for scales having more than five points.  The 
particular features of the scale (such as defined midpoint) were not 
as important as the knowledge, skills, and motivation of the rater.  
An appendix contains the graphic rating scales.  (Contains 2 figures, 
11 tables, and 32 references.) (SLD)
  Descriptors: *Evaluators; *Rating Scales; *Scoring; *Student Evaluation;
*Test Construction; Test Use; Visual Arts
  Identifiers: *FACETS Computer Program; FACETS Model; National Assessment of
Educational Progress; *Scoring Rubrics
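The descriptive graphic (continuous) scale studied here can be sketched as a mapping from a rater's mark anywhere on a line to one of n discrete score points. This is an assumed illustration of the general idea only, not the NAEP instrument or the Facets software:

```python
# Sketch of a descriptive graphic rating scale: the rater marks a point
# on a continuous line, and the mark is converted to a score point.
# A hypothetical illustration of the technique, not the study's tooling.

def to_score_point(mark, n_points):
    """Map a continuous mark in [0, 1] onto score points 1..n_points."""
    if not 0.0 <= mark <= 1.0:
        raise ValueError("mark must lie in [0, 1]")
    # A mark at exactly 1.0 still falls in the top score point.
    return min(int(mark * n_points) + 1, n_points)

# The same mark resolves more finely on a scale with more points --
# which is what intermediate points buy, though the study found little
# reliability gain beyond about five points.
print(to_score_point(0.62, 4))   # 3
print(to_score_point(0.62, 10))  # 7
```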

  
  EJ512702  SE554938
  Demonstration Assessment: Measuring Conceptual Understanding and 
Critical Thinking with Rubrics.
  Radford, David L.; And Others
  Science Teacher, v62 n7 p52-55 Oct   1995
  ISSN: 0036-8555
  Document Type: TEACHING GUIDE (052);  JOURNAL ARTICLE (080)
  Target Audience: Teachers; Practitioners
  Presents the science demonstration assessment as an authentic-assessment
technique to assess whether students understand basic science
concepts and can use them to solve problems.  Uses rubrics to prepare 
students for the assessment and to assign final grades.  Provides 
examples of science demonstration assessments and scoring rubrics
for the topics of acids/bases and surface tension.  (JRH)
  Descriptors: Critical Thinking; Demonstrations (Science); *Evaluation;
Problem Solving; *Science Activities; Science Education; *Science 
Instruction; Science Process Skills; *Scientific Concepts; Secondary 
Education; Secondary School Science; Teaching Methods
  Identifiers: Authentic Assessment

  
  EJ512701  SE554937
  Rubrics Revisited: Allowing Students to Assume Responsibility for 
the Quality of Their Work.
  Liu, Katherine
  Science Teacher, v62 n7 p49-51 Oct   1995
  ISSN: 0036-8555
  Document Type: PROJECT DESCRIPTION (141);  JOURNAL ARTICLE (080)
  Target Audience: Teachers; Practitioners
  Describes various aspects and advantages of the use of rubrics as 
tools for assessment.  Presents additive rubrics as assessment tools 
that allow students to assume responsibility for the quantity and 
quality of their work and to see its value beyond the letter grade 
they receive.  (JRH)
  Descriptors: *Evaluation; Science Education; *Science Instruction; 
Secondary Education; Secondary School Science; *Teaching Methods

  
  EJ509095  SE554602
  Effective Rubric Design: Making the Most of this Powerful Assessment
Tool.
  Jensen, Ken
  Science Teacher, v62 n5 p34-37 May   1995
  ISSN: 0036-8555
  Document Type: TEACHING GUIDE (052);  JOURNAL ARTICLE (080)
  Target Audience: Teachers; Practitioners
  Presents examples of one rubric style and different ways that 
rubrics can be used.  Uses the example of developing student 
understanding of how energy moves through the atmosphere.  (MKR)
  Descriptors: Science Education; *Science Instruction; Secondary 
Education; Secondary School Science; *Student Evaluation; Teaching 
Methods
  Identifiers: *Alternative Assessment

  
  ED407413  TM026429
  Learning in Overdrive: Designing Curriculum, Instruction, and 
Assessment from Standards. A Manual for Teachers.
  Mitchell, Ruth; Willis, Marilyn
  1995
  148p.; Assisted in authorship by the Chicago Teachers Union Quest 
Center.
  ISBN: 1-55591-933-2
  Available From: North American Press, 350 Indiana Street, Suite 
350, Golden, CO 80401-5093; phone: 800-992-2908 ($17).
  Document Type: BOOK (010);  TEACHING GUIDE (052)
  Target Audience: Practitioners; Teachers
  The most important issue in education today is helping students 
reach high standards.  These standards are changing American 
education from a system driven by inputs and regulations to one 
judged by results.  This manual is intended for use with any set of 
standards.  Followed step by step, it will take the teacher from the 
abstract statements of the standards to units of instruction.  The 
process begins with the standards and shows teachers how to connect 
them into interdisciplinary clusters, how to devise real-world tasks 
that embody the standards, and how to break the unit into learning 
segments that enable students to complete the tasks and attain the 
standards.  The nine steps to standards are listed as: (1) "Selecting 
Standards"; (2) "What's in a Standard"; (3) "The Legbone's Connected 
to the Kneebone"; (4) "The Real World"; (5) "The Final Culminating 
Task"; (6) "Mapping Backward from the Culminating Task into Learning 
Sections"; (7) "Rubrics and Scoring"; (8) "Polishing the Stone"; and 
(9) "Seeing the Whole." Appendixes list standards documents, present 
forms to use in the curriculum development process, and summarize the 
nine steps.  Step Seven includes a detailed explanation of 
performance assessment, with discussion of portfolios, exhibitions, 
and the construction of rubrics.  (SLD)
  Descriptors: *Academic Achievement; *Curriculum Development; *Educational
Assessment; Elementary Secondary Education; *Instructional 
Design; Interdisciplinary Approach; Performance Based Assessment; *Scoring;
Selection; *Standards; Teaching Methods; Test Construction
  Identifiers: Scoring Rubrics; Standard Setting

  
  ED391833  TM024434
  Authentic Assessment, Professional's Guide.
  Ryan, Concetta Doti
  1994
  76p.
  ISBN: 1-55734-838-3
  Available From: Teacher Created Materials, Inc., 6421 Industry Way, 
Westminster, CA 92683 (Order Number TCM 838).
  Document Type: BOOK (010);  NON-CLASSROOM MATERIAL (055)
  Authentic assessment is the process of gathering evidence and 
documenting student learning and growth in an authentic context.  Authentic
assessment can do a better job than more traditional forms of 
assessment in informing educators and parents about a student's real 
achievement.  The first chapter of this book presents an overview of 
authentic assessment, its origins, and its goals.  The next four 
chapters focus on authentic assessments that can be used across the 
curriculum: (1) portfolios; (2) performance assessment; (3) rubrics; 
and (4) observation-based assessment processes.  The following 
chapter focuses on the inclusion of students and parents in the 
assessment process.  The final four chapters identify authentic 
assessments that can be used specifically with the content areas of 
language arts, mathematics, science, and social studies.  A list of 
professional resource organizations is included.  (Contains 2 
samples, 10 checklists and forms for assessment processes, and 28 
references.) (SLD)
  Descriptors: Academic Achievement; *Educational Assessment; Language
Arts; Mathematics; *Observation; Parent Participation; *Portfolios
(Background Materials); Sciences; *Scoring; Social Studies; Student 
Evaluation; *Test Construction
  Identifiers: *Authentic Assessment; *Performance Based Evaluation; 
Scoring Rubrics

  
  ED358143  TM019932
  Designing Scoring Rubrics for Performance Assessments: The Heart of 
the Matter.
  Arter, Judy
  Northwest Regional Educational Lab., Portland, OR. Test Center.
  Apr 1993
  25p.; Paper presented at the Annual Meeting of the American 
Educational Research Association (Atlanta, GA, April 12-16, 1993).
  Document Type: EVALUATIVE REPORT (142);  CONFERENCE PAPER (150)
  Good performance criteria can and must help define educational 
goals and serve as an instructional tool in the classroom.  The 
rationale for considering instructional usefulness when designing 
performance criteria begins with the proposition that clearly stated 
performance criteria are excellent instructional tools.  It is also 
apparent that classroom teachers are the ones who will be 
administering the performance assessments that are developed, and 
that the classroom is the place in which change will occur.  Good 
performance criteria help teachers and students alike understand the 
targets of instruction.  The following design considerations are 
important in working toward the goal of good performance criteria: 
(1) the need for generalized criteria; (2) development of both 
holistic and analytical trait systems; (3) covering all that is 
important; and (4) having teachers do the scoring.  Maximizing the 
impact of the performance assessment dollar means having assessments that
teachers can use in the classroom.  Five figures list and 
illustrate aspects of the criteria development process.  (SLD)
  Descriptors: Academic Standards; Cost Effectiveness; *Educational
Assessment; Educational Objectives; *Evaluation Criteria; Holistic
Approach; *Instructional Improvement; Research Design; *Scoring;
*Student Evaluation; Teacher Role; Teaching Methods; *Test Construction; Test Use
  Identifiers: *Performance Based Evaluation; Scoring Rubrics

  
  ED358114  TM019880
  Scoring Rubrics for Performance Tests: Lessons Learned from Job 
Performance Assessment in the Military.
  Wise, Lauress
  Defense Manpower Data Center, Monterey, CA.  Apr 1993
  16p.; Paper presented at the Annual Meeting of the National Council 
on Measurement in Education (Atlanta, GA, April 13-15, 1993).
  Document Type: EVALUATIVE REPORT (142);  CONFERENCE PAPER (150)
  Industrial and organizational psychologists for the Department of 
Defense have been working for the past 10 years to develop high 
fidelity measures of job performance for use in validating job 
selection procedures and standards.  Information on developing and 
scoring performance exercises in the Job Performance Measurement 
(JPM) Project is presented, and lessons that might be useful in 
education are extracted.  In many ways, the task of the industrial 
psychologist is easier than that of the educator because of broader 
agreement about how the task should be performed and close alignment 
between training and expected performance.  Tasks identified by each 
Armed Service were analyzed, and scoring rules were developed.  The 
following lessons seem especially pertinent to educational 
assessment: (1) careful specification of the domains assessed is 
essential for evaluating the adequacy of any sample selected; (2) 
scoring elements that assess adherence to processes that are taught 
will have better diagnostic value (and possibly greater validity) 
than will those that just reflect the quality of output; (3) scoring 
procedures must be anchored to observable criteria; and (4) 
generalizability theory provides a useful framework for evaluating
alternative scoring rubrics.  One table lists the JPM occupational
specialties, and two figures illustrate the discussion.  An 
attachment summarizes the lessons to be learned.  (SLD)
  Descriptors: Educational Assessment; Educational Research; Evaluation
Methods; Generalizability Theory; Industrial Psychology; *Job 
Performance; *Military Personnel; *Occupational Tests; Organizational 
Development; *Performance Tests; Personnel Evaluation; Personnel 
Selection; *Scoring; Standards; Test Construction; Training
  Identifiers: Department of Defense; Job Performance Measurement 
Project; *Performance Based Evaluation

Return to FAQ on Scoring Rubrics

Return to the Index of FAQs



©1999-2012 Clearinghouse on Assessment and Evaluation. All rights reserved. Your privacy is guaranteed at ericae.net.
