Evaluating Educational Programs. ERIC Digest Series Number EA 54.

Program evaluation has long been a useful technical tool for determining whether programs are meeting their stated goals. Specialists submit reports that help administrators decide on changes in curriculum content or direction. In recent years, program evaluators have taken on an expanded role because their experience can be of value at every stage of a program's development. This Digest introduces the reader to the scope of evaluation and the changing roles evaluators are asked to play in the school district.
HOW ARE EDUCATIONAL PROGRAMS EVALUATED?

Three categories of instructional program evaluation are described by Bruce Wayne Tuckman (1985). "Formative evaluation" is an internal function that feeds results back into the program to improve an existing educational unit; this kind of evaluation is used frequently by teachers and school administrators to compare outcomes with goals. Attainment can be measured and procedures modified over time. "Summative evaluation" exists for the purpose of demonstration and documentation. Various ways of achieving similar goals can be compared. Summative evaluations help school districts analyze their unique characteristics and choose the program that will best achieve their pedagogical goals. An example is the evaluation of the adaptability and success in the work force of students who have emerged from a program. "Ex post facto evaluation" is a study over time. It attempts to determine whether new programs, launched without readily predictable results, are achieving the desired goals. Here the data generated by continuous analysis are compared over time and, when available, compared with data from similar pilot programs. Both longitudinal results (comparison of results over time) and cross-sectional results (comparison of different student groups) give evaluators the data to recommend improvement or termination.
HOW HAVE NONTRADITIONAL MEASUREMENTS AFFECTED PROGRAM EVALUATION?

Standardized testing has traditionally been the principal measure of student attainment, but it involves a plethora of statistical uncertainties that have led some program evaluators to adopt other techniques. Several alternative testing methods are being used: (1) standardized interviews allow students' responses to be compared and summarized; (2) direct tests (sometimes verbal), such as reading and math demonstrations, enable teachers to gauge strengths and weaknesses and determine competency beyond mere right and wrong answers; and (3) students' notes, art work, and other material can be inspected for evidence of mastery. Edward F. DeRoach (1988) argues that relying on an array of achievement, literacy, and minimal competency testing overemphasizes cognitive-achievement factors while disregarding affective-aesthetic development. He suggests using a program evaluation profile that reveals less tangible values, such as: (1) a program description that evaluates the nature of the community and the cultural/occupational background of parents; (2) program objectives that would measure performance in American history, for example, by involvement in school political activity or community service; (3) program content that ranges from knowledge of the facts to facility with placing information in larger contexts; and (4) processes that measure listening, questioning, summarizing, problem-solving, and creating skills, as well as social skills such as tolerance, respect, and fairness to others. It remains unclear whether such "performance-based" assessments can be usefully compared across wide-ranging student populations.
HOW DOES COMMUNITY AND SCHOOL BOARD INPUT AFFECT PROGRAM EVALUATION?

A review of empirical studies of citizen involvement in evaluation (Nick L. Smith 1983) concluded that citizen judgments must be used judiciously to avoid bias, but that such judgments can be predictive of community responsiveness and receptivity to future collaboration. Program evaluators have paid more attention to political factors in recent years as evaluation has become a stronger force in program design. Hence, attention to public sentiment needs to be a high priority.
HOW DO ADMINISTRATORS VIEW EVALUATIONS?

In small schools, the missing element in evaluations seems to be the attempt to make such studies systematic, purposive, cyclical, comprehensive, and well-communicated (James R. Sanders 1988). Sanders suggests establishing a Program Review Committee (PRC) composed of the superintendent, the principal, a grade-level chairperson, and an educational specialist. Each year the committee should conduct a review of one or two programs, so that each program receives careful scrutiny once every five years.
WHAT ARE THE NEW ROLES FOR EVALUATORS?

One new role for the evaluator is translating policy questions developed by school boards and legislators into the more precise questions of program evaluation. In this role, the evaluator helps fashion new and innovative programs with features that are readily measurable. Once pilot programs are begun, the evaluator then has the opportunity to determine how fully the program was implemented before evaluating its effectiveness. According to Fitzpatrick (1988), evaluation questions imply certain design decisions. Besides content, these questions can help determine the parameters of cost, time, and the availability of professional personnel. The program manager can monitor the innovative program through the oral briefings and written reports of the program evaluator. To be effective, communication should be ongoing and not limited to a final report at the end of the year; ongoing communication also makes the reporting of evaluation findings to state-level policy makers more sensitive and precise. Thus, the evaluator serving as a program partner is effective at every stage of program development, integrating differing levels of understanding and shifts in accountability.
RESOURCES

Fitzpatrick, Jody L. "Roles of the Evaluator in Innovative Programs: A Formative Evaluation." Evaluation Review 12, 4 (August 1988): 449-61. EJ 381 144.

Hansen, Joe B., and Walter E. Hathaway. "Setting the Evaluation Agenda: The Policy-Practice Cycle." Paper presented at AERA, New Orleans, LA, April 5-9, 1988. 42 pages. ED 293 862.

King, Jean A., and Bruce Thompson. "How Principals, Superintendents View Program Evaluation." NASSP Bulletin 67, 459 (January 1983): 46-52. EJ 274 300.

Lazarus, Mitchell. Evaluating Educational Programs. Arlington, VA: American Association of School Administrators, 1982. 79 pages. ED 266 414.

Sanders, James R. "Approaching Evaluation in Small Schools." ERIC Digest Series. Las Cruces, NM: ERIC Clearinghouse on Rural Education and Small Schools, 1988. 13 pages. ED 296 816.

Smith, Nick L. "Citizen Involvement in Evaluation: Empirical Studies." Studies in Educational Evaluation 9, 1 (1983): 105-17. EJ 287 582.

Tuckman, Bruce Wayne. Evaluating Instructional Programs. 2nd ed. Rockleigh, NJ: Allyn and Bacon, Inc., 1985. 292 pages. ED 261 015.

This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract No. OERI RI88062004. The ideas and opinions expressed in this Digest do not necessarily reflect the positions or policies of OERI, ED, or the Clearinghouse. This Digest is in the public domain and may be freely reproduced.
Title: Evaluating Educational Programs. ERIC Digest Series Number EA 54.
Descriptors: Administrator Role; *Consultants; *Curriculum Evaluation; Elementary Secondary Education; Evaluation Methods; Portfolios [Background Materials]; *Program Evaluation; *Student Evaluation; *Test Validity
Identifiers: ERIC Digests
http://ericae.net/edo/ED324766.htm