From the ERIC database

Evaluating Educational Programs. ERIC Digest Series Number EA 54.

Beswick, Richard

Program evaluation has long been a useful technical tool for determining whether programs are meeting their stated goals. Specialists submit reports that help administrators decide on changes in curriculum content or direction.

In recent years program evaluators have taken on an expanded role because their experience can be of value in every stage of the development of the program. This Digest introduces the reader to the scope of evaluation and the changing roles evaluators are asked to play in the school district.

HOW ARE EDUCATIONAL PROGRAMS EVALUATED?
Every area of school curriculum is designed with certain goals in mind. A program evaluation measures the outcome of a program based on its student-attainment goals, level of implementation, and external factors such as budgetary constraints and community support.

Three categories of instructional program evaluation are described by Bruce Wayne Tuckman (1985). "Formative evaluation" is an internal function that feeds results back into the program to improve an existing educational unit; this kind of evaluation is used frequently by teachers and school administrators to compare outcomes with goals. Attainment can be measured and procedures modified over time.

"Summative evaluation" exists for the purpose of demonstration and documentation. Various ways of achieving similar goals can be compared. Summative evaluations help school districts analyze their unique characteristics and choose the program that will best achieve their pedagogical goals. An example is the evaluation of the adaptability and success in the work force of students who have emerged from a program.

"Ex post facto evaluation" is a study over time. It attempts to determine if new programs, launched without readily predictable results, are achieving the desired goals. Here the data generated by continuous analysis are compared over time and, when available, compared with data of similar pilot programs. Both longitudinal (comparison of results over time) and cross-sectional (comparison of different student groups) results give evaluators the data to recommend improvement or termination.

HOW HAVE NONTRADITIONAL MEASUREMENTS AFFECTED PROGRAM EVALUATION?
The first and most important issue in evaluation--how well students achieve mastery of new facts and skills--can often be measured by standardized tests. Verifications of reliability and validity are the litmus tests of these standardized evaluation tools. Reliability is the achievement of consistency in results. Consistency is measured in several ways: by comparing test results over time (giving the same test at intervals), by grade-level expectations, and by national percentile rankings. Validity is the degree to which a test actually measures what it claims to measure, that is, students' actual mastery of the intended subject matter.
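To make the two concepts concrete, the short Python sketch below estimates test-retest reliability as the correlation between two administrations of the same test, and criterion validity as the correlation between test scores and an outside measure of the same skill. This is only an illustrative sketch: the score lists are invented, and the correlation routine is written out by hand rather than drawn from any particular statistics package.

# Illustrative sketch (hypothetical data): test-retest reliability and
# criterion validity estimated as Pearson correlations.

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length score lists.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# The same test given to the same students at two points in time.
first_administration = [62, 70, 75, 81, 88, 93]
second_administration = [65, 72, 74, 84, 90, 95]

# A hypothetical outside criterion the test is supposed to reflect.
course_grades = [2.1, 2.6, 2.8, 3.2, 3.6, 3.9]

reliability = pearson(first_administration, second_administration)  # consistency over time
validity = pearson(first_administration, course_grades)             # agreement with criterion

print("Test-retest reliability estimate:", round(reliability, 2))
print("Criterion validity estimate:", round(validity, 2))

A high reliability estimate means students are ranked consistently from one administration to the next; a high validity estimate means the test tracks the outcome it is meant to reflect. Neither figure, on its own, says anything about whether the program's goals are worthwhile.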

However, standardized testing involves a plethora of statistical uncertainties that have led some program evaluators to adopt other techniques to measure student attainment. Several alternative testing methods are being used: (1) standardized interviews allow students' responses to be compared and summarized; (2) direct tests (sometimes verbal) such as reading and math demonstrations enable teachers to gauge strengths and weaknesses and determine competency beyond mere right and wrong answers; and (3) students' notes, art work, and other material can be inspected for evidence of mastery.

Edward F. DeRoche (1987) thinks that relying on an array of achievement, literacy, and minimal competency testing overemphasizes cognitive-achievement factors while disregarding affective-aesthetic development. He suggests using a program evaluation profile that reveals less tangible values such as: (1) program description that evaluates the nature of the community and the cultural/occupational background of parents; (2) program objectives that would measure performance in American history, for example, by involvement in school political activity or community service; (3) program content that ranges from knowledge of the facts to facility with placing information in larger contexts; and (4) processes that measure listening, questioning, summarizing, solving, and creating skills, as well as social skills such as tolerance, respect, and fairness to others. It remains unclear whether such "performance-based" assessments can be usefully compared across wide-ranging student populations.

HOW DOES COMMUNITY AND SCHOOL BOARD INPUT AFFECT PROGRAM EVALUATION?
The role of citizen judgments in program evaluation was the focus of four studies conducted by the Northwest Regional Educational Laboratory in Portland, Oregon. Nick L. Smith (1983) notes the growing pressure for citizens and their representatives (school boards) to participate in school planning and review activities. Based on the American tradition of local control of education, it is thought that increased parental participation on boards developing new educational philosophies and innovative curricula would make school district programs more responsive to local ideological, economic, and cultural values.

These studies concluded that citizen judgments must be used judiciously to avoid bias, but that such judgments can be predictive of community responsiveness and receptivity to future collaboration. Program evaluators have paid more attention to political factors in recent years as evaluation has become a stronger force in program design. Hence, attention to public sentiment needs to be a high priority.

HOW DO ADMINISTRATORS VIEW EVALUATIONS?
For principals and superintendents, the purpose of program evaluation is to provide information that helps them make decisions about programs (Jean A. King and Bruce Thompson 1983). In general, principals feel that the benefits of evaluations are minimal, either because evaluations fail to measure the program components that matter most or because principals' own proximity to the everyday realities of the educational process gives them what they feel is a better basis for understanding needs and implementing change. Superintendents tend to be more positive about the value of program evaluation. In particular, evaluations that reported deficiencies and discussed possible solutions were highly rated; second in importance were personal meetings with evaluation personnel.

In small schools, the missing element in evaluations seems to be the attempt to make such studies systematic, purposive, cyclical, comprehensive, and well-communicated (James R. Sanders 1988). Sanders suggests that a Program Review Committee (PRC), composed of the superintendent, principal, grade level chairperson, and an educational specialist, be established. Each year the committee should conduct a review of one or two programs, so that each program receives careful scrutiny once every five years.

WHAT ARE THE NEW ROLES FOR EVALUATORS?
According to Jody L. Fitzpatrick (1988), the job of the evaluator is expanding from technical roles to political and advisory roles. In innovative programs, defined as those still in a research and development phase, evaluators help identify goals and develop a strategy for accomplishing these goals.

Another new role for the evaluator is translating policy questions developed by school boards and legislators into the more precise questions of program evaluation. In this role, the evaluator helps fashion new and innovative programs with features that are readily measurable. Once pilot programs are begun, the evaluator then has the opportunity to determine how fully the program was implemented before evaluating its effectiveness. According to Fitzpatrick, evaluation questions imply certain design decisions. Besides content, these questions can help determine the parameters of cost, time, and the availability of professional personnel.

The program manager can monitor the innovative program through the oral briefings and written reports of the program evaluator. To be effective, communication should be ongoing and not limited to a final report at the end of the year. Ongoing communication also makes the reporting of evaluation findings to state-level policymakers more sensitive and precise. Thus, using the evaluator as a program partner at every stage of program development helps integrate differing levels of understanding and shifts in accountability.

RESOURCES
DeRoche, Edward F. An Administrator's Guide for Evaluating Programs and Personnel: An Effective Schools Approach. Newton, MA: Allyn and Bacon, Inc., 1987. 319 pages. ED 283 242.

Fitzpatrick, Jody L. "Roles of the Evaluator in Innovative Programs: A Formative Evaluation." Evaluation Review 12,4 (August 1988):449-61. EJ 381 144.

Hansen, Joe B., and Walter E. Hathaway. "Setting the Evaluation Agenda: The Policy-Practice Cycle." Paper presented at AERA, New Orleans, LA, April 5-9, 1988. 42 pages. ED 293 862.

King, Jean A., and Bruce Thompson. "How Principals, Superintendents View Program Evaluation." NASSP Bulletin 67,459 (January 1983):46-52. EJ 274 300.

Lazarus, Mitchell. Evaluating Educational Programs. Arlington, VA: American Association of School Administrators, 1982. 79 pages. ED 266 414.

Sanders, James R. "Approaching Evaluation in Small Schools." ERIC Digest Series. Las Cruces, NM: ERIC Clearinghouse on Rural Education and Small Schools, 1988. 13 pages. ED 296 816.

Smith, Nick L. "Citizen Involvement in Evaluation: Empirical Studies." Studies in Educational Evaluation 9,1 (1983):105-17. EJ 287 582.

Tuckman, Bruce Wayne. Evaluating Instructional Programs. 2nd ed. Rockleigh, NJ: Allyn and Bacon, Inc., 1985. 292 pages. ED 261 015.

This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract No. OERI RI88062004. The ideas and opinions expressed in this Digest do not necessarily reflect the positions or policies of OERI, ED, or the Clearinghouse. This Digest is in the public domain and may be freely reproduced.


Title: Evaluating Educational Programs. ERIC Digest Series Number EA 54.
Author: Beswick, Richard
Publication Year: 1990
Document Type: ERIC Product (071); ERIC Digests (selected) (073)
Target Audience: Administrators and Practitioners
ERIC Identifier: ED324766
Available from: Publication Sales, ERIC Clearinghouse on Educational Management, University of Oregon, 1787 Agate Street, Eugene, OR 97403 (free; $2.50 postage and handling).
This document is available from the ERIC Document Reproduction Service.

Descriptors: Administrator Role; * Consultants; * Curriculum Evaluation; Elementary Secondary Education; Evaluation Methods; Portfolios [Background Materials]; * Program Evaluation; * Student Evaluation; * Test Validity

Identifiers: ERIC Digests

