From the ERIC database
Steps in Designing an Indicator System. ERIC/TM Digest.
The development of even a single indicator is an iterative process that de Neufville (1975) estimates takes about ten years to complete. The process is time-consuming because indicators are developed in a policy context; thus, their interpretation goes beyond the traditional canons of science and enters the realm of politics (cf. de Neufville, 1978-79). With this caveat, we can enumerate some steps to identify an initial set of indicators and to develop alternative indicator systems.
CONCEPTUALIZE POTENTIAL INDICATORS
We began by conceptualizing the education system in terms of three components:
o inputs (the human and financial resources available to the education system),
o processes (a set of nested systems that create the educational environment that children experience in school, e.g., school organization and curriculum quality), and
o outputs (the consequences of schooling for students from different backgrounds).
For each of these components, we identified a large potential pool of constructs for which indicators might be developed. Each construct appeared either to be an important enabling condition (e.g., it moderated the link between an input or process indicator and an outcome indicator) or to have a direct link to the desired outcomes of mathematics and science education.
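The three-component model described above can be sketched as a simple data structure. This is an illustration only, not code from the sourcebook; the construct names are hypothetical examples drawn loosely from the text.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the input-process-output model.
# Component and construct names are hypothetical, not from the sourcebook.
@dataclass
class Construct:
    name: str
    component: str  # "input", "process", or "output"

@dataclass
class IndicatorPool:
    constructs: list = field(default_factory=list)

    def by_component(self, component: str) -> list:
        """Return all constructs belonging to one component of the model."""
        return [c for c in self.constructs if c.component == component]

pool = IndicatorPool([
    Construct("per-pupil expenditure", "input"),
    Construct("teacher quality", "input"),
    Construct("school organization", "process"),
    Construct("curriculum quality", "process"),
    Construct("student achievement", "output"),
])
print([c.name for c in pool.by_component("process")])
```

Organizing the pool this way makes the later screening and system-design steps operate over an explicit inventory rather than an ad hoc list.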
REFINE THE INDICATOR POOL
We applied eight criteria derived from our working definition of indicators. We assumed that indicators should:
1. reflect the central features of mathematics and science education,
2. provide information pertinent to current or potential problems,
3. measure factors that policy can influence,
4. measure observed behavior rather than perceptions,
5. be reliable and valid,
6. provide analytical links,
7. be feasible to implement, and
8. address a broad range of audiences.
These criteria were used to select indicators that reflect the major components of schooling, are reliable and valid (to some minimal extent), and meet basic standards of usefulness to the policy community. These measures then became the core around which different indicator system options were generated.
Applying these criteria may produce some casualties. For example, some highly desirable indicators may have to be eliminated because they cannot be measured reliably. This exercise suggests that some potential indicators, although not yet sufficiently developed for inclusion in an indicator system, are critical to a better understanding of mathematics and science education and should be part of a developmental research agenda. Once these indicators meet our criteria, they can be incorporated into the indicator system.
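The screening step above can be sketched as a partition of candidate indicators into a core set and a developmental research agenda. This is a minimal, hypothetical sketch: the criterion labels and candidates are placeholders, and in practice each judgment would rest on expert review and empirical evidence rather than booleans.

```python
# Hedged sketch: screen candidate indicators against the eight criteria.
# Criterion labels and candidate names below are illustrative only.
CRITERIA = [
    "central to math/science education",
    "pertinent to problems",
    "policy-mutable",
    "measures behavior, not perception",
    "reliable and valid",
    "provides analytical links",
    "feasible",
    "broad audience",
]

def screen(candidates: dict) -> tuple:
    """Partition candidates into a core set and a research agenda.

    A candidate failing any criterion (e.g. reliability) is routed to
    the developmental research agenda rather than discarded outright,
    mirroring the digest's treatment of promising but immature measures.
    """
    core, agenda = [], []
    for name, met in candidates.items():
        if all(met.get(c, False) for c in CRITERIA):
            core.append(name)
        else:
            agenda.append(name)
    return core, agenda

candidates = {
    "enrollment in advanced courses": {c: True for c in CRITERIA},
    "quality of classroom discourse": {**{c: True for c in CRITERIA},
                                       "reliable and valid": False},
}
core, agenda = screen(candidates)
print(core)    # candidates meeting all eight criteria
print(agenda)  # candidates routed to the developmental research agenda
```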
DESIGN ALTERNATIVE INDICATOR SYSTEM OPTIONS
EVALUATE THE OPTIONS
For example, an indicator system might:
1. [...] and curriculum quality),
2. describe those trends state by state,
3. identify problems emerging on the horizon,
4. link teacher and curriculum quality to achievement, thus enabling policymakers to target reforms, and
5. enable the sponsor to provide leadership by monitoring curricular and achievement areas that are currently ignored.
BEGIN DEVELOPING OR REFINING INDIVIDUAL INDICATORS
The advantages and disadvantages of each major potential indicator in the model must be evaluated, using currently available data and analyses. Systematically synthesizing and contrasting information from a variety of databases will allow the usefulness of current indicators to be assessed and will lay the groundwork for developing and implementing new indicators.
Many data collection efforts and analyses will fall short of indicator requirements. Some of the most important potential indicators may not be measured at all, and well-known difficulties with existing datasets are likely to constrain the analyses that indicators require. In many cases, sample sizes or designs will not be adequate for disaggregating data by groups of interest; other datasets will not permit relational analyses among the various components of the system. It is important to identify the shortcomings in existing data and analyses, and where these gaps and inconsistencies exist, to specify what work is needed to obtain reliable, valid, and useful indicators.
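One of the adequacy checks described above, whether a dataset's sample sizes can support disaggregation by groups of interest, can be sketched as a simple audit. The threshold, subgroup names, and counts below are all hypothetical assumptions, not figures from the digest.

```python
# Illustrative sketch: flag subgroups whose sample sizes are too small
# to support disaggregated reporting. Threshold and data are hypothetical.
MIN_CELL_SIZE = 30  # assumed minimum n for a reportable subgroup estimate

def audit_disaggregation(sample_sizes: dict, minimum: int = MIN_CELL_SIZE) -> dict:
    """Return the subgroups whose samples cannot support reliable estimates."""
    return {group: n for group, n in sample_sizes.items() if n < minimum}

# Hypothetical subgroup counts from a single state's sample.
state_sample = {"urban": 410, "rural": 22, "suburban": 180, "ELL": 15}
gaps = audit_disaggregation(state_sample)
print(gaps)  # subgroups needing augmented sampling before disaggregation
```

Running such an audit across every dataset feeding the indicator system is one concrete way to produce the inventory of gaps and inconsistencies the text calls for.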
It is therefore necessary to identify a research agenda directed toward improving an indicator system. This agenda should become a research component of the indicator system itself that enables researchers to piggyback on monitoring activities and test alternatives to indicators currently in use. With increasing confidence in research findings, new indicator technologies can be incorporated into the system.
REFERENCES
de Neufville, J.I. (1975). Social Indicators and Public Policy: Interactive Processes of Design and Application. Amsterdam: Elsevier.
de Neufville, J.I. (1978-79). Validating policy indicators. Policy Sciences, 10, 171-188.
Shavelson, R.J., McDonnell, L.M., & Oakes, J. (Eds.). (1989). Indicators for Monitoring Mathematics and Science Education: A Sourcebook. Santa Monica, CA: RAND Corporation. This digest was adapted from material appearing in the sourcebook.
This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract number RI88062003. The opinions expressed in this report do not necessarily reflect the position or policies of OERI or the Department of Education.
Title: Steps in Designing an Indicator System. ERIC/TM Digest.
Descriptors: * Data Collection; Educational Assessment; Educational Policy; Elementary Secondary Education; * Evaluation Criteria; Evaluation Methods; Formative Evaluation; * Management Information Systems; * Mathematics Education; Research Methodology; Research Needs; * Science Education; * Systems Development
Identifiers: *Educational Indicators; ERIC Digests; Monitoring
©1999-2012 Clearinghouse on Assessment and Evaluation. All rights reserved.