Vocational Education Performance Standards. ERIC Digest No. 96.
A broad consensus for improving vocational education programs has emerged from the educational reform movement aimed at raising the performance of U.S. students in academic subjects. Pending legislation for the reauthorization of the Carl D. Perkins Vocational Education Act (House Bill HR 7 and Senate Bill 1109) mandates the development of performance indicators or standards. In the literature the terms educational indicator, quality indicator, outcome indicator, performance standard, and performance measure are used interchangeably, and there is general agreement that indicators or standards are single or composite statistics that reveal something about the performance or health of an educational system (Asche 1990). According to Asche, "quality or performance indicators have suddenly become the nation's barometer of education wellness" (pp. 3-4).
A number of options for establishing vocational education performance standards are currently under consideration. This ERIC Digest examines some vocational education experiences with outcome measures, describes proposed approaches, and enumerates potential issues and challenges in establishing performance standards for vocational education.
VOCATIONAL EDUCATION OUTCOME MEASURES
LABOR MARKET OUTCOMES
Even if labor market outcome information could be collected in a valid, unbiased, and accurate manner, many vocational educators would object to such indicators being the sole measure of program effectiveness on the following grounds: (1) adopting placement as the primary criterion ignores the multiple goals of vocational education; (2) a large number of economic and personal factors beyond the control of the vocational education system determine the employment of students; (3) a narrow focus on placement encourages programs to admit only those who can be placed and to concentrate on coaching in job placement and interview skills at the expense of vocational skills; and (4) placement rates and other economic indicators measure the gross effect of participation (total placements, total earnings) rather than the net effect (the difference between labor market outcomes that occurred when students participated in vocational education versus what would have occurred had these programs not existed) (ibid.).
LEARNING OUTCOMES
Although there are several ways to measure learning outcomes, the most common method employed in vocational education is occupational competency testing designed to assess mastery of skills (tasks) and knowledge found in specific jobs. Even though competency testing has not played a role in federal vocational education policy, states have been quite active in developing and using competency tests (Hoachlander, Choy, and Brown 1989).
Occupational competency tests provide important indications of program effectiveness, but many vocational educators do not think they should be used as the sole basis for performance standards for the following reasons: test scores often reflect economic and social factors in addition to abilities developed through education and training; schools may coach students on test-taking strategies and on specific test items at the expense of teaching the skills and knowledge purportedly measured by the test; tests do not necessarily indicate how a person would perform at work but instead measure the upper limit of what an individual can do; and at a time when there is a growing consensus that more broadly applicable generic skills are needed in the workplace, use of competency tests could encourage emphasis on highly specialized skills (OTA 1989).
ACCESS
First, when enrollments are used to monitor access, attention is deflected from the primary policy concern that these students acquire the skills necessary to compete effectively in the labor market. Second, monitoring access has traditionally relied on program-based data collection rather than course-based efforts, and the result has been a "series of inaccurate, inconsistent, and generally chaotic attempts to arbitrarily assign students to vocational education programs" (ibid., p. 44). To correct these problems, Hoachlander, Choy, and Brown (1989) suggest tying access into performance-based policies as well as relying on periodic collection of student transcripts, which would permit longitudinal analysis, to monitor access.
Illinois is currently pilot testing a technically advanced system that uses quality indicators to link planning, evaluation, and program improvement through data that are collected and analyzed at either the local or regional level. The system uses six indicators for measuring outcomes: placement and continuing education, enrollment, employer satisfaction, student satisfaction, employability skills attainment, and cost. Although the primary focus of the Illinois system is program improvement, it appears to satisfy most of the requirements in the pending federal legislation (Asche 1990).
Copa and Scholl (1983) report on efforts to verify and develop a set of indicators for use in measuring the outcomes of vocational education in Minnesota. A method of verification was used that took into consideration current social and economic issues, vocational education as a part of education more broadly conceived, and critical questions useful in examining any potential indicator. Use of this verification process resulted in the selection of four vocational education indicators that appeared to have adequate data available to support continued development: number of graduates employed, number of graduates employed in occupations related to program, employer's satisfaction with the quality of the graduate's work, and program cost. Although these indicators are heavily oriented toward labor market outcomes, the process described is useful for those developing systems of measuring vocational outcomes.
ISSUES AND CHALLENGES
Apling (1989) describes a set of potential problems that any performance standards system for vocational education needs to address: the impact of performance standards on those whom the program serves, with the danger that individuals most needing services will be the least likely to be served; the influence of performance standards on the types of training provided, with a danger that effective--but long-term and expensive--services will be discouraged in favor of short-term and inexpensive approaches; the difficulty in meeting multiple standards, some of which may not be compatible; the problem of adjusting standards for programs in different labor markets or serving different types of clients; and the difficulty of setting minimum standards (p. 18).
Apling also suggests that implementing a performance standards system for vocational education requires resolving such questions as these: What performance is assessed? Whose performance is assessed? Who uses performance information, and how do they use it? And what results from meeting, not meeting, or exceeding standards?
Potentially, any system of performance standards developed for use in vocational education can have both positive and negative effects. Asche (1990) cautions vocational educators to be aware that indicators tend to measure quantity, not quality; are designed to serve policymakers, not educators; aim for parsimony, not complexity; tend to reflect those things that can be easily measured; and, unless there is reasonable agreement on what educational goals should be, may shape the curriculum (p. 5). On the other hand, Asche also points out that a performance indicator system can have positive aspects: locally developed indicators can provide opportunities for school-based improvement and the development of shared goals and values, indicators can be useful in monitoring policies and practice and improving schools, and indicators offer an opportunity for vocational education to be included in educational reforms (p. 6).
REFERENCES
Asche, M. "Standards and Measures of Performance: Indicators of Quality for Virginia Vocational Education Programs." Paper prepared for the teleconference "Preparing a Competent Work Force through Indicators of Quality for Vocational Education." Blacksburg: Division of Vocational and Technical Education, Virginia Polytechnic Institute and State University, 1990.
Copa, G., and Scholl, S. "Developing Vocational Education Indicators: Some Steps in Moving from Selection to Use." St. Paul: Minnesota Research and Development Center for Vocational Education, University of Minnesota, September 1983. (ED 235 385).
Hoachlander, E. G.; Choy, S. P.; and Brown, C. L. "Performance-Based Policy Options for Postsecondary Vocational Education: A Feasibility Study." National Assessment of Vocational Education Background Paper. Berkeley, CA: MPR Associates, March 1989.
Office of Technology Assessment. "Performance Standards for Secondary School Vocational Education: Background Paper." Washington, DC: Office of Technology Assessment, April 1989. (ED 313 591).
This ERIC Digest was developed in 1990 by Susan Imel, ERIC Clearinghouse on Adult, Career, and Vocational Education, with funding from the Office of Educational Research and Improvement, U.S. Department of Education under Contract No. RI88062005. The opinions expressed in this report do not necessarily reflect the position or policies of OERI or the Department of Education.