From the ERIC database
Evaluating Student Writing: Methods and Measurement. ERIC Digest.
Those working in the field of composition have sought continuously over the past two decades to shape and refine discourse theory and to develop more effective classroom methods of evaluation. A careful look at these efforts suggests that the literature on evaluating writing resembles a hydra: one theoretical body supporting two heads. With one head, we develop methods to critique or respond to students' written products (even as these products represent a stage in the writing process); with the other, we devise ways to measure or assess the quality of the written product according to some value system. This digest considers (1) methods of response and (2) measurement of quality, as represented by effective classroom teaching methods.
METHODS OF RESPONSE
Krest (1987) gathered effective time-saving techniques for responding to student writing from research articles by Fuery and Stanford. Peer revision, peer editing, peer grading, computer programs, conferences, and a system of error analysis are presented as effective measures that enhance individual development as well as encourage more student writing.
Noting that research has shown teacher comments have little effect on the quality of student writing, Grant-Davie and Shapiro (1987) suggest that teachers view comments as rhetorical acts, think about their purpose for writing them, and teach students to become their own best readers. To achieve this goal, teachers should respond to student drafts with fewer judgments and directives and more questions and suggestions. Grant-Davie and Shapiro also outline a workshop that uses peer editing and revision.
Similarly, Whitlock (1987) explains how Peter Elbow's concepts of "pointing," "summarizing," "telling," and "showing" can form the basis of an effective method for training students to work in writing groups and give reader-based feedback to peer writing.
MEASURING WRITING QUALITY
To measure growth in the use of writing conventions, an analytic scale analysis of skills (Cooper and Odell, 1977) can be developed and used effectively with samples of students' writing. This instrument describes briefly, in non-technical language, what is considered high, mid, and low quality in the following areas: (1) the student's ability to use words accurately and effectively; (2) the ability to use standard English; (3) the ability to use appropriate punctuation; and (4) the ability to spell correctly. Each of these skills is ranked for each paper on a continuum from 1 (low) to 6 (high) (Hyslop, 1983).
In addition to these instruments, various teacher/writers in the field share the following strategies they have developed for measuring writing quality.
Teale (1988) insists that informal observations and structured performance sample assessments are more appropriate than standardized tests for measuring quality in early childhood literacy learning. For example, when young children are asked to write and then read what they write, the teacher can learn a great deal about their composing strategies and about their strategies for encoding speech in written language. Krest (1987) offers general techniques that show teachers how to give students credit for all their work while spending less time doing it; these involve holistic scoring, a similar technique of general comments, and the portfolio. Harmon (1988) suggests that teachers withhold measuring students' progress until enough time has elapsed to allow for measurable growth, and then measure the quality of selected pieces of writing at periodic intervals.
Cooper and Odell (1977) suggest that teachers can eliminate much of the uncertainty and frustration of measuring the quality of these samples if they identify limited types of discourse and create exercises that stimulate writing in the appropriate range but not beyond it. In their triangular model of discourse, the explanatory, persuasive, and expressive extremes are represented by the angles of the triangle. Each point is associated with a characteristic of language related to a goal of writing, and both the assignments and the resulting measure of quality focus on that particular goal.
Krest (1987) presents an interesting modification of this process, measuring the quality of students' papers with the following levels of concern in mind: high order concerns (HOCs): focus, details, and organization; middle order concerns (MOCs): style and sentence order; and lower order concerns (LOCs): mechanics and spelling.
All in all, it appears that true growth in writing is a slow, seldom linear process. Writing teachers have a wide variety of responses they can offer students before making formal evaluations of the text (Harmon, 1988).
REFERENCES
Cooper, Charles R., and Lee Odell, eds. "Evaluating Writing: Describing,
Measuring, Judging." Urbana: National Council of Teachers of English,
1977, 37-39. ED 143 020
Grant-Davie, Keith and Nancy Shapiro. "Curing the Nervous Tick:
Reader-Based Response to Student Writing." Paper presented at the
Annual Meeting of the Conference on College Composition and
Communication, March 1987. ED 282 196
Harmon, John. "The Myth of Measurable Improvement." English Journal, 77(5)
September 1988, 79-80. EJ 376 076
Hittleman, Daniel R. "Developmental Reading, K-8, Teaching from a
Whole-Language Perspective" 3rd ed. Columbus, OH: Merrill, 1988.
Hyslop, Nancy B. "A Study to Test the Effects of Daily Writing upon
Students' Skills in Explanatory Discourse at the Eleventh Grade
Level." Unpublished dissertation, 1983.
Krest, Margie. "Time on My Hands: Handling the Paper Load." English
Journal, 76(8) December 1987, 37-42. EJ 367 295
National Standards: Oral and Written Communications. Washington Office of
the State Superintendent of Public Instruction, Olympia, 1984. ED 297
Teale, William H. "Developmentally Appropriate Assessment of Reading and
Writing in the Early Childhood Classroom." Elementary School Journal,
89(2) 1988. EJ 382 620
Whitlock, Roger. "Making Writing Groups Work: Modifying Elbow's
Teacherless Writing Groups for the Classroom." Paper presented at
the Annual Meeting of the Conference on College Composition and
Communication, March 1987. ED 284 284
Nancy B. Hyslop has taught writing both at the secondary and the university level, most recently at the University of Evansville.
This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract no. RI88062001. Contractors undertaking such projects under government sponsorship are encouraged to express freely their judgment in professional and technical matters. Points of view or opinions, however, do not necessarily represent the official view or opinions of the Office of Educational Research and Improvement.
Descriptors: Elementary Secondary Education; *Evaluation Methods; Higher Education; Instructional Effectiveness; *Student Evaluation; Teacher Role; *Writing Evaluation; *Writing Instruction; Writing Processes; Writing Research; *Writing Teachers
Identifiers: ERIC Digests
©1999-2012 Clearinghouse on Assessment and Evaluation. All rights reserved.