Clearinghouse on Assessment and Evaluation


ERIC Documents Database Citations & Abstracts for Meta Evaluation in Educational Program Evaluation


Instructions for ERIC Documents Access

Search Strategy:
Meta Evaluation [as an ERIC indexed term] or meta evaluation [as a title word]
OR
meta evaluation [as a free-text word] AND (Evaluation Methods or Evaluation Utilization) [as major ERIC descriptors]

  ED422401  TM028963
  Evaluating the Evaluators: The External Evaluator's Perspective.
  Hansen, Joe B.
  1998
  13p.; Paper presented at the Annual Meeting of the American 
Educational Research Association (San Diego, CA, April 13-17, 1998).
  Document Type: EVALUATIVE REPORT (142);  CONFERENCE PAPER (150)
  The question of who evaluates the evaluators is explored through 
the experiences of an external evaluation team.  Some have called 
evaluating evaluators and their work "evaluation auditing," but it 
could also be viewed as a form of meta-evaluation.  At the request of 
the Director of Research and Evaluation for the "ESU 18" (named for a 
county administrative unit) evaluation team in Lincoln (Nebraska), 
three directors of research and evaluation in other school districts 
(DREs) formed an external visitation team (EVT) to conduct a meta-
evaluation of the unit's Evaluation Team (ET).  The ET has 
a unique relationship to the Lincoln Public Schools because it is 
housed in the same building but is administratively separate, 
reporting to a county administrative unit.  This gives the ET the 
advantage of being independent and less subject to pressure to 
conduct evaluations that create positive public relations for the 
school district.  By the same token, their independence lends 
credibility to their studies and reports.  Challenges exist, at least 
theoretically, in that the program staff members who receive the 
evaluations are under little obligation to pay attention to them.  
This results in a need for the ET to work closely with school 
district staff to demonstrate the value of their contribution to 
instructional program quality.  This they have accomplished admirably.  
The Lincoln Public Schools staff viewed the ESU 18 evaluators as 
intelligent, thoughtful, creative problem solvers and facilitators.  
These evaluators were seen as adding value to the educational system.  
Results also show that using the EVT to conduct an evaluation of the 
evaluators is both viable and useful.  (SLD)
  Descriptors: Educational Change; Elementary Secondary Education; 
*Evaluation Methods; *Evaluators; *Program Evaluation; *School 
Districts
  Identifiers: External Evaluation; *Lincoln Public Schools NE; *Meta 
Evaluation


  EJ554815  TM520659
  Evaluating Evaluation in the European Commission.
  Levy, Roger P.
  Canadian Journal of Program Evaluation/La Revue canadienne 
d'evaluation de programme, v12 n1 p1-18 Sum   1997
  ISSN: 0834-1516
  Document Type: JOURNAL ARTICLE (080);  EVALUATIVE REPORT (142)
  In recent years, the European Commission has undertaken a number of 
initiatives to improve evaluation in the European Union.  This 
article reviews these initiatives and focuses on the work of the 
Commission's evaluation study group.  The future of evaluation in the 
Commission is discussed in the context of management of change. (SLD)
  Descriptors: *Administration; *Change; *Evaluation Methods; 
Evaluation Utilization; Foreign Countries; *Program Evaluation; 
*Research Methodology
  Identifiers: *European Communities Commission; European Union


  ED403314  TM026049
  The Development, Validation, and Applicability of "The Program 
Evaluation Standards: How To Assess Evaluations of Educational 
Programs."
  Gould, R. Bruce; And Others
  Aug 1995
  26p.
  Document Type: EVALUATIVE REPORT (142)
  The work done by the Validation Panel that was commissioned by the 
Joint Committee on Standards for Educational Evaluation (Joint 
Committee) to monitor the development of "The Program Evaluation 
Standards: How To Assess Evaluations of Educational Programs" is 
described, and its conclusions summarized.  This report focuses on 
the development process, the assumptions underlying the effort, and 
the applicability of the "Standards" in different contexts.  Revision 
of the "Standards" had begun at the Joint Committee's 1990 meeting.  
An early decision was made to include a standard for meta-evaluation 
that required that the evaluation itself be formatively and 
summatively evaluated.  True to this new standard, the Joint 
Committee commissioned a Validation Panel to perform that meta-
evaluation function during the development of the revised 
"Standards." The developed "Standards" consist of 30 specific 
standards grouped into categories of utility, feasibility, propriety, 
and accuracy.  Although no explicit statements of guiding assumptions 
are included in the "Standards," a number of implicit assumptions 
center on the need for educational program evaluation standards and 
the possibility of agreement about such standards.  Representatives 
of the 15 organizations that make up the Joint Committee considered 
the results of expert commentary, testimony at public hearings, and 
field tests in approving the development process.  The position is 
taken that the development of the "Standards" was very systematic and 
open, and likely resulted in a set of standards that represent the 
state of the art in educational program evaluation.  (Contains nine 
references.) (SLD)
  Descriptors: *Evaluation Methods; Formative Evaluation; Program 
Development; *Program Evaluation; *Standards; Summative Evaluation; 
*Test Construction
  Identifiers: *Joint Committee on Standards for Educ Evaluation

  
  EJ483679  IR528585
  Assessing the Quality of Training Evaluation Studies.
  Basarab, David J., Sr.
  Performance and Instruction, v33 n3 p19-22 Mar   1994
  ISSN: 0884-1985
  Document Type: POSITION PAPER (120);  PROJECT DESCRIPTION (141);  
JOURNAL ARTICLE (080)
  Outlines a procedure which training professionals and business 
managers can use to assess the effectiveness of their corporate 
evaluation studies.  Meta-evaluation is explained; and phases in the 
training evaluation process, including plan, develop, obtain, 
analyze, and report, are described based on experiences at Motorola.  
(Contains two references.) (LRW)
  Descriptors: Audience Analysis; *Evaluation Methods; *Evaluation 
Research; *Industrial Training; *Training Methods
  Identifiers: Motorola Inc


  EJ498468  TM518493
  Assessing Highly Accomplished Teaching: Developing a Metaevaluation 
Criteria Framework for Performance-Assessment Systems for National 
Certification of Teachers.
  Nyirenda, Stanley
  Journal of Personnel Evaluation in Education, v8 n3 p313-27 Oct 1994
  ISSN: 0920-525X
  Document Type: EVALUATIVE REPORT (142);  JOURNAL ARTICLE (080)
  This article attempts to outline the issues and to describe the 
process of developing a metaevaluation framework for assessing the 
quality, efficiency, and effectiveness of the performance-assessment 
instruments being created by the National Board for Professional 
Teaching Standards.  The metaevaluation framework consists of a set 
of evaluation criteria and guidelines.  (SLD)
  Descriptors: *Criteria; *Educational Assessment; Evaluation 
Utilization; *Guides; Instructional Effectiveness; *Licensing 
Examinations (Professions); Standards; Teacher Certification; 
*Teacher Evaluation; Test Construction
  Identifiers: *Meta Evaluation; National Board for Professional 
Teaching Standards; *Performance Based Evaluation


  EJ485739  TM517937
  Meta-Evaluation of School Evaluation Models.
  Gallegos, Arnold
  Studies in Educational Evaluation, v20 n1 p41-54   1994
  Theme issue titled "Special Issue on CREATE's Work in Educational 
Evaluation."
  ISSN: 0191-491X
  Document Type: REVIEW LITERATURE (070);  EVALUATIVE REPORT (142);  
JOURNAL ARTICLE (080)
  The Center for Research on Educational Accountability and Teacher 
Evaluation (CREATE) completed a meta-evaluation of school evaluation 
models in 1992.  The procedures, categories, standards, and criteria 
of this study, which reviewed 51 models, are described.  The results 
of the study, which reflect broadened dimensions of school 
evaluation, are summarized.  (SLD)
  Descriptors: Accountability; Classification; Criteria; *Educational 
Research; Elementary Secondary Education; *Evaluation Methods; *Meta 
Analysis; *Models; Program Evaluation
  Identifiers: *Center Res Educational Accountability Teacher Eval; 
CREATE Program


  ED358648  EC302212
  Special Education Program Evaluation: A Planning Guide. An 
Overview. CASE Commissioned Series.
  McLaughlin, John A.
  Council of Administrators of Special Education, Inc.; Indiana 
Univ., Bloomington.  May 1988
  110p.
  Available From: CASE Research Committee, Indiana University, School 
of Education, Smith Research Center-100A, 2805 E. 10th St., 
Bloomington, IN 47405 (Order No. PES-2, $15).
  Document Type: NON-CLASSROOM MATERIAL (055)
  This resource guide is intended to help in planning special 
education program evaluations.  It focuses on: basic evaluation 
concepts, identification of special education decision makers and 
their information needs, specific evaluation questions, procedures 
for gathering relevant information, and evaluation of the evaluation 
process itself.  Preliminary information discusses the nature of 
evaluation, the people involved, and ways to maximize the utilization 
of evaluation results.  Then, the following eight steps to planning a 
local evaluation are detailed: (1) getting started; (2) describing 
the program; (3) writing evaluation questions; (4) planning 
collection of information; (5) planning analysis of evaluation data; 
(6) planning the evaluation report; (7) managing the evaluation; and 
(8) meta evaluation.  Four appendices provide a meta evaluation 
checklist, a list of 8 references on evaluation utilization, a list 
of 11 specific strategies to enhance evaluation utilization, and 15 
worksheets keyed to the 8 planning steps.  (DB)
  Descriptors: Data Collection; *Disabilities; Elementary Secondary 
Education; *Evaluation Methods; *Evaluation Utilization; Information 
Sources; *Planning; Program Effectiveness; *Program Evaluation; 
*Special Education
  Identifiers: *Evaluation Reports


  ED282908  TM870319
  In-House Evaluation--Navigating the Minefield.
  Fein, Edith; And Others
  Apr 1987
  18p.; Paper presented at the Annual Meeting of the American 
Evaluation Association (Kansas City, MO, October 29-November 1, 
1986).
  Document Type: EVALUATIVE REPORT (142);  CONFERENCE PAPER (150)
  This paper describes experiences with in-house evaluation, using 
four case examples from Child and Family Services (a social service, 
child welfare, and mental health agency) in a meta-evaluation model 
to illustrate benefits and sensitivities of the internal evaluator's 
role.  Projects reviewed were: (1) a child sexual abuse treatment 
team; (2) a family day care project; (3) agency policy in response to 
the Tarasoff decision; and (4) a new performance appraisal system.  
Meta-evaluations examine variables not originally included in the 
evaluation design and enable the researcher to consider unintended as 
well as intended consequences when examining program outcomes.  Meta-
evaluation may be conducted to demonstrate the relevance of in-house 
research and evaluation efforts and can be used to document the 
utility of research activities.  It can also point out where the 
evaluator could have been more successful.  Before conducting a meta-
evaluation, it is necessary for the evaluator to consider several 
issues: (1) timing (the length of time that should elapse between 
completion of an evaluation and the start of the meta-evaluation); 
(2) unintended consequences (program evaluations can have both 
negative and positive unintended consequences); (3) other functions 
(evaluators may develop additional programs, information, etc.); (4) 
objectivity of the meta-evaluation; (5) costs of the evaluation and 
the meta-evaluation; and (6) technical and procedural concerns 
(evaluators' work should be of the highest standard).  In-house 
evaluation can make essential contributions to program planning and 
development.  It may be appropriate for in-house evaluators to do 
more meta-evaluations to examine the outcomes and provide models for 
others.  (BAE)
  Descriptors: Adults; Case Studies; Early Childhood Education; 
*Evaluation Methods; *Evaluators; Family Programs; Formative 
Evaluation; Power Structure; Private Agencies; *Program Development; 
*Program Evaluation; Research Design
  Identifiers: *Meta Evaluation

  
  EJ325994  TM510846
  Organizing Evaluations for Use As a Management Tool.
  Burry, James; And Others
  Studies in Educational Evaluation, v11 n2 p131-57   1985
  Theme Issue with title "Evaluation as a Management Tool."
  Document Type: JOURNAL ARTICLE (080);  REVIEW LITERATURE (070)
  Factors associated with the use of program evaluation results are 
examined.  Evaluation can serve a variety of educational management 
needs if (1) these needs are organized around a central concern, and 
if (2) stakeholders use evaluation information so that their decision 
making resolves the central concern.  (GDC)
  Descriptors: *Administrator Role; Decision Making; Educational 
Administration; Elementary Secondary Education; Evaluation Methods; 
*Evaluation Utilization; Evaluators; Higher Education; *Information 
Needs; Interprofessional Relationship; Literature Reviews; Predictor 
Variables; *Program Evaluation
  Identifiers: Meta Evaluation

  
  EJ320585  TM510704
  A Systems Approach to the Analysis and Management of Large-Scale 
Evaluations.
  Scheerens, Jaap
  Studies in Educational Evaluation, v11 n1 p83-93   1985
  Document Type: JOURNAL ARTICLE (080);  NON-CLASSROOM MATERIAL (055)
  In order to better understand the influence of the organizational 
setting on evaluation, a conceptual framework was developed and 
tried out in a meta evaluation of innovatory educational programs in 
Holland.  Four components are explained--contingency factors, 
organization structure, policy-making, and evaluation research--and a 
checklist is presented.  (GDC)
  Descriptors: Evaluation Criteria; *Evaluation Methods; *Evaluation 
Utilization; *Meta Analysis; Models; Organizational Climate; 
*Politics of Education; Program Evaluation; Summative Evaluation; 
*Systems Approach
  Identifiers: Control Theory; Evaluation Problems; Evaluation 
Research; *Meta Evaluation


  EJ309343  TM510211
  Evaluation Synthesis for the Legislative User.
  Chelimsky, Eleanor; Morra, Linda G.
  New Directions for Program Evaluation, n24 p75-89 Dec 1984
  Theme issue with title "Issues in Data Synthesis."
  Document Type: JOURNAL ARTICLE (080);  PROJECT DESCRIPTION (141)
  Evaluation synthesis, a meta-evaluation method developed by the 
General Accounting Office to improve evaluation utilization by 
legislators, is described.  It is designed to: (1) focus on the 
information needs and priorities of legislators, and (2) ensure 
continuous communication between sponsor and evaluator.  The 
strengths and limitations of this approach are discussed.  (BS)
  Descriptors: Evaluation Methods; *Evaluation Utilization; *Federal 
Programs; Information Needs; *Legislators; *Meta Analysis; 
Organizational Communication; *Program Evaluation; Research 
Methodology
  Identifiers: Evaluation Research; *General Accounting Office

  
  EJ290805  TM508496
  Metaevaluation.
  Nilsson, Neil; Hogben, Donald
  New Directions for Program Evaluation, n19 p83-97 Sep 1983
  Document Type: POSITION PAPER (120)
  The authors criticize the value-free notion of social science and 
evaluation.  They particularly assail relativists, those who confuse 
the making of reliable value judgments with how these value judgments 
are used.  (Author/PN)
  Descriptors: *Evaluation Methods; *Evaluation Needs; Research Needs; 
Research Problems; Scientific Principles; *Standards; *Theories
  Identifiers: *Meta Evaluation; *Metatheory; Relativism


  EJ270544  TM507350
  Follow Through: A Case Study in Meta-Evaluation Research.
  St. Pierre, Robert G.
  Educational Evaluation and Policy Analysis, v4 n1 p47-55 Spr 1982
  Available From: Reprint: UMI
  Document Type: JOURNAL ARTICLE (080);  EVALUATIVE REPORT (142)
  This paper shows how several of Cook and Gruder's meta-evaluation 
models have been applied in the national evaluation of Project Follow 
Through.  Participants in the evaluation are identified, their roles 
are described, and the degree to which these roles map onto the meta-
evaluation models is assessed.  (Author/BW)
  Descriptors: *Compensatory Education; Elementary Education; 
Evaluators; Federal Programs; *Models; Participant Characteristics; 
*Program Evaluation; *Research and Development Centers; *Role 
Perception
  Identifiers: *Meta Evaluation; *Project Follow Through

  
  ED229837  EA015614
  The 'I and We' of Accountability: An Example of Meta-Evaluation in 
Educational Administration.
  Macpherson, R. J. S.
  20 Jun 1982
  20p.; Paper presented at the Annual State Conference of the 
Victorian Council of Educational Administration (Melbourne, Victoria, 
Australia, June 1982).
  Document Type: CONFERENCE PAPER (150);  RESEARCH REPORT (143)
  The process of school review currently used in Victoria (Australia) 
involves an internal school evaluation, a School Review Board visit 
and report, and a followup stage.  Six months after the board had 
visited one high school, a retrospective view of the review process 
was gathered from 8 students, 9 parents, 24 teachers, and 5 
administrators.  As a validity check, respondents then commented on 
the multiple perspectives represented.  The administrative 
intervention was intended as a school improvement strategy; however, 
metaevaluation of the realities experienced by those reviewed 
suggests that the process was a ritual to maintain illusions of power 
at all levels.  The review failed to facilitate the development of 
teaching, learning, administration, or governance.  Only some of the 
people's attitudes changed, mostly in short-term, counter-productive 
ways.  (MLF)
  Descriptors: *Accountability; Educational Administration; 
*Educational Assessment; Educational Improvement; *Evaluation Methods; 
Foreign Countries; *Institutional Evaluation; Secondary Education
  Identifiers: *Australia (Victoria); *Meta Evaluation


  ED228280  TM830156
  Meta-Analysis, Meta-Evaluation and Secondary Analysis.
  Martin, Paula H.
  Oct 1982
  37p.
  Document Type: REVIEW LITERATURE (070)
  Meta-analysis, meta-evaluation and secondary analysis are methods 
of summarizing, examining, evaluating or re-analyzing data in 
research and evaluation efforts.  Meta-analysis involves the 
summarization of research findings to come to some general 
conclusions regarding effects or outcomes of a given 
treatment/project/program.  Glass's approach standardizes various 
effect measures and controls for these in analyzing data.  Meta-
evaluation is a method of evaluation research examining evaluation 
methodologies, procedures, data analysis techniques, interpretation 
of results, and the validity and reliability of conclusions.  
Secondary analysis, as defined by Glass, is "the re-analysis of data 
for the purpose of answering the original research question with 
better statistical techniques or answering new questions with old 
data." A review of the literature related to these methodologies 
gives examples of actual studies using these techniques.  Specifics 
on meta-evaluation in federally funded bilingual education programs 
illustrate the methodology.  As current budgetary cutbacks affect 
state and federal programs, meta-analysis and meta-evaluation are 
assuming important roles.  (CM)
  Descriptors: Bilingual Education Programs; *Educational Research; 
Elementary Secondary Education; Federal Programs; *Program Evaluation; 
*Research Methodology; *Statistical Analysis; *Statistical Data
  Identifiers: Elementary Secondary Education Act Title VII; Glass (G 
V); *Meta Analysis; *Meta Evaluation; Secondary Analysis
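
  Note on the Glass approach cited above: Glass's standardized effect 
size (the usual definition, given here for reference; it does not 
appear in the ERIC record itself) expresses a study's outcome in 
control-group standard deviation units:

    \Delta = \frac{\bar{X}_{E} - \bar{X}_{C}}{s_{C}}

where \bar{X}_{E} and \bar{X}_{C} are the experimental and control 
group means and s_{C} is the control group's standard deviation.  
Putting outcomes on this common scale is what allows meta-analysis to 
combine effect measures from studies that used different instruments.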


  ED218352  TM820394
  Biases: Threats to Validity in Evaluation Models.
  Innes, Allison H.
  Mar 1982
  17p.; Paper presented at the Annual Meeting of the American
Educational Research Association (66th, New York, NY, March 19-23,
1982).
  Document Type: CONFERENCE PAPER (150);  BIBLIOGRAPHY (131);  REVIEW 
LITERATURE (070)
  Potential sources of invalidity relevant to evaluation approaches 
are identified, described, and categorized.  The evaluation focus 
covered is the determination of a program's worth, not the 
determination of the presence of cause-and-effect relationships.  
Concern has been expressed by evaluators about the presence and 
effect of biases and their possible threat to the validity of the 
conclusions.  Validity threats that are discussed in psychology, 
sociology, and related fields, and that are also mentioned in the 
evaluation literature, are considered.  A list of 15 potential 
validity threats was compiled.  Given their psychological and 
sociological nature, these biases are likely to occur to some degree 
when using most evaluation approaches.  Decision-oriented, client-
oriented, and connoisseur-based studies are susceptible to more of 
these biases than policy, accountability, or objectives-based studies.  
The author suggests that knowledge of these 15 sources of invalidity 
could facilitate less biased evaluative judgments.  That is, if 
during a formative evaluation it is discovered that there is a bias 
present, such as relativity of data, then alternative ways of 
gathering data may be introduced, or program criteria may be 
reassessed.  This checklist of biases could prove a useful tool in a
meta-evaluation.  (Author/PN)
  Descriptors: *Bias; Evaluation Criteria; Formative Evaluation; 
*Program Evaluation; *Validity
  Identifiers: *Evaluation Problems; Meta Evaluation

 
  ED199301  TM810352
  "Net Benefit," A Neglected Metaevaluation Criterion.
  Lai, Morris K.
  Apr 1981
  10p.
  Document Type: CONFERENCE PAPER (150);  POSITION PAPER (120);  
EVALUATIVE REPORT (142)
  Evaluation has generally not been held accountable for 
promoting a "net-benefit." The term "net-benefit" rather than 
"benefit" is used because a given amount of legitimate benefit may 
come at the expense of an inordinate expenditure of evaluation 
resources or energy.  If any aspect of an evaluation is unlikely to 
provide any net-benefit to humanity as far as the overall evaluation 
is concerned, then it probably should not be undertaken, given the relative 
scarcity of evaluation resources and energy.  Examples of net-
nonbeneficial energy-wasting evaluation activities include: (1) 
carrying out overly complex statistical analyses; (2) dissemination 
and use (or lack thereof) of the draft version of the Standards 
produced by the Committee on Standards for Educational Evaluation 
which forbade the draft version to be cited, duplicated, or 
distributed without written permission of the Chairman of the 
Standards Committee; and (3) excessive time spent adhering to style 
guidelines when publishing articles.  Determining 
whether an evaluation activity has the potential to lead to net-
benefit is clearly not always an easy task, but it is an effort 
toward achieving accountability.  (RL)
  Descriptors: *Accountability; *Educational Assessment; *Evaluation 
Criteria
  Identifiers: *Meta Evaluation; *Net Benefit
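
  Note on the net-benefit criterion above: the abstract gives no 
formula, but the underlying accounting can be restated as

    \text{net benefit} = \text{gross benefit} - \text{evaluation resources expended}

so, under Lai's criterion, an evaluation activity is justified only 
when this difference is expected to be positive.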

  
  EJ180565  TM503237
  Metaevaluation Research.
  Cook, Thomas D.; Gruder, Charles L.
  Evaluation Quarterly, v2 n1 p5-51 Feb 1978
  Four projects aimed at evaluating the technical quality of recent 
summative evaluations are discussed, and models of metaevaluation are 
presented.  Common technical problems are identified and practical 
methods for solving these problems are outlined, but these methods 
are limited by the current state of the art.  (Author/CTM)
  Descriptors: Consultants; *Data Analysis; *Evaluation Methods; 
Evaluators; *Program Evaluation; Research Methodology; Research 
Problems; Research Reports; *Research Reviews (Publications); 
*Summative Evaluation
  Identifiers: *Meta Evaluation; Secondary Analysis

  
  ED163057  TM008109
  A Checklist for Evaluating Large-Scale Assessment Programs. Paper 
#9 in Occasional Paper Series.
  Shepard, Lorrie A.
  Western Michigan Univ., Kalamazoo. School of Education.
  Apr 1977
  60p.
  Sponsoring Agency: Carnegie Corp. of New York, N.Y.
  Document Type: RESEARCH REPORT (143)
  A checklist with five major categories is presented and discussed 
for its use in planning and carrying out an assessment program.  
Three other outlines are briefly presented for comparison: 
Stufflebeam's Meta-Evaluation Criteria; Scriven's Checklist for 
Evaluating Products, Producers, and Proposals; and Stake's Table of 
Contents for a Final Evaluation Report.  Preparatory activities for 
assessment include staffing the evaluation team, defining the 
purpose, and identifying appropriate strategies for data collection.  
The first category of the checklist, goals and purposes, includes a 
number of different kinds of goals as well as criteria for judging 
goals.  The technical aspects category includes several items about 
tests and also includes sampling, test administration, and reporting.  
There are subtopics under the heading of management such as planning 
and personnel.  The category dealing with intended and unintended 
effects has two headings: people and groups who may be affected, and 
kinds of effects.  The final category deals with dollar costs, costs 
in time, and possible negative effects of the assessment program.  
Finally, it is suggested that the results should be synthesized and 
contrasted with plausible alternatives.  (CTM)
  Descriptors: *Check Lists; *Educational Assessment; Evaluation; 
*Evaluation Criteria; *Evaluation Methods; *Program Evaluation; 
*Summative Evaluation
  Identifiers: *Meta Evaluation

  
  ED137417  TM006226
  Evaluating Evaluation.
  Matuszek, Paula; Lee, Ann
  Austin Independent School District, Tex. Office of Research and 
Evaluation.  [Apr 1977]
  23p.; Paper presented at the Annual Meeting of the American 
Educational Research Association (61st, New York, New York, April 4-
8, 1977)
  Document Type: RESEARCH REPORT (143)
  The various needs for evaluating evaluators and their efforts are 
discussed in this paper.  The argument is presented that evaluators 
should not themselves carry out summative evaluation on their own 
efforts.  Several possible purposes of evaluation of evaluation 
staffs and products are pursued, and the methods and persons most 
appropriate to each purpose are described.  Planning an evaluation of 
evaluation to best meet the needs of evaluators is also discussed.  
(Author/MV)
  Descriptors: Decision Making; *Educational Researchers; Evaluation; 
Evaluation Criteria; *Evaluation Methods; Evaluation Needs; 
*Evaluators; *Personnel Evaluation; Questionnaires; *Self Evaluation; 
Surveys
  Identifiers: *Meta Evaluation

  
  ED090319  TM003615
  Toward a Technology for Evaluating Evaluation.
  Stufflebeam, Daniel L.
  Apr 1974
  103p.; Paper presented at the American Educational Research 
Association Annual Meeting (Chicago, Illinois, April 15-19, 1974)
  The aim of this paper is to present a logical structure for the 
evaluation of evaluation (meta-evaluation) and to suggest ways of 
conducting such evaluations.  Part I contains an analysis of 
background factors and problems associated with meta-evaluation--that 
is, the evaluation of evaluation.  This part discusses the need for 
meta-evaluation and summarizes some of the pertinent literature.  
Suggestions are made concerning what criteria should guide the 
development of a meta-evaluation methodology.  The final and major 
portion of Part I is an enumeration of 6 classes of problems that 
jeopardize meta-evaluation methodology.  Part II is a conceptual 
response to Part I, containing a definition of meta-
evaluation and a set of premises to undergird a conceptualization of 
meta-evaluation.  Most of Part II is devoted to a logical structure 
for designing meta-evaluation studies.  The third part of the paper 
is an application of the logical structure presented in Part II.  
Basically, Part III contains five meta-evaluation designs.  Four of the 
designs are for use in guiding evaluation work, and the fifth is used 
in judging completed evaluation work.  Taken together the three parts 
of the paper are intended to provide a partial response to the needs 
for conceptual and practical developments of meta-evaluation.  
(Author/MLP)
  Descriptors: Accountability; Definitions; *Evaluation; *Evaluation 
Criteria; *Evaluation Methods; Evaluation Needs; Guides; *Models; 
Technology
  Identifiers: *Meta Evaluation

  
  EJ003653  EF500032
  An Introduction to Meta-Evaluation.
  Scriven, Michael
  Educational Product Report, v2 n5 p36-38 Feb 1969
  Descriptors: Criteria; *Data Analysis; Equipment Evaluation; 
*Evaluation Methods; *Evaluation Needs; Program Evaluation
