Clearinghouse on Assessment and Evaluation




From the ERIC database

Alternative Assessment and Second Language Study: What and Why? ERIC Digest.

Hancock, Charles R.

Alternative assessment, authentic assessment, portfolio assessment, self-assessment, self-monitoring, and the list goes on. Clearly, assessment is a popular topic these days. Frequently encountered in professional publications, workshops, inservice training, and college courses, assessment meets the criteria for being a cutting-edge topic. Why is there such an emphasis on assessment in the 1990s? What does an emphasis on assessment mean for language teachers, researchers, and students? This Digest looks at these questions and discusses some of the practical implications of assessing language students differently than we currently do.

One useful way to think about assessment is to contrast it with testing, an ever-present factor that confronts teachers and students in all disciplines. Tests have come to be an accepted component of instructional programs throughout the world. Sometimes tests are justified on the basis of accountability: are students learning what they are supposed to be learning? Decision-makers need this type of evidence in order to make judgments about how to spend resources, for example. Sometimes, tests are viewed as feedback for language students concerning their progress. Oller (1979, p. 401) stated that "the purpose of tests is to measure variance in performances of various sorts." In this sense, testing--typically achievement testing--serves as a monitoring device for learning. Tests are given at a particular point in time to "sample" student learning. Most of us are familiar with "paper and pencil" tests even if they take on a computerized format. Ordinarily, after the test is given, some type of reporting takes place, often in the form of a single score or grade. Sometimes, decisions are made based on test results (e.g., retake the test, pass the course, go on to the next unit of instruction, etc.). A final important aspect of testing is that the test is usually kept hidden from the students until it is administered, a degree of secrecy intended to preserve the security of the test.

Let's assume that this simple characterization of tests and testing is correct. Assessment can then be shown to be very different, and some important differences between testing and assessment become obvious. In an instructional program, assessment is usually an ongoing strategy through which student learning is not only monitored--a trait shared with testing--but by which students are involved in making decisions about the degree to which their performance matches their ability. Spolsky (1992, p. 38) rightly argues that diagnostic or formative assessment is typically curriculum-driven. This type of assessment shadows the curriculum and provides feedback to students and teachers. He wisely argues, too, for a multilevel system that combines testing and assessment. A paraphrase of this model (p. 37) would go something like this:

* Students are provided opportunities before and after units of instruction to assess their own performance (self-assessment).

* Teachers periodically assess students' performance and both discuss their respective assessments (tests and measurements).

* Occasionally, some external monitor assesses the student's (and perhaps the teacher's) performance and discusses it with the teacher.

Assessment, then, should be viewed as an interactive process that engages both teacher and student in monitoring the student's performance. Criterion-referenced testing clearly reflects this effort to bring teaching, testing, and assessment into congruence. Interested readers will find the 1994 Northeast Conference Report (Hancock, 1994) a valuable resource on this topic.

Many of the reigning theoretical assumptions on which contemporary testing and assessment rely are based on behaviorist views of cognition and development. In the 1990s, we have come to realize that new, alternative ways of thinking about learning and assessing learning are needed. Gardner (1993) argues that there is a resurgence of interest in the idea of a multiplicity of intelligences. He and other researchers claim the existence of mental modules (i.e., fast-operating, reflexlike, information-processing devices). Fodor (1983) espoused the view that there are separate analytic devices involved in tasks like syntactic parsing, tonal recognition, and facial perception. Others (Sternberg, 1988; Perkins, 1981; Gruber, 1985) have investigated the concept of creativity. Their studies have shown that creative individuals do not have unique mental modules, but that they use what they have in more efficient and flexible ways. Such individuals are extremely reflective about their activities, their use of time, and the quality of their products (Gardner, 1993).

So, while the operative word is "alternative," we must ask: alternative to what? A case can be made in second language study for an alternative to conventional ways of monitoring students' language progress and performance. Alternative assessment is an ongoing process involving the student and teacher in making judgments about the student's progress in language using non-conventional strategies.

A new assessment initiative in foreign and second language study should acknowledge the effect of context on performance and provide the most appropriate contexts in which to assess competence, including ones that involve the individual in making self-assessments. Brecht and Walton (1993, p. 2) define competence as "the capacity to perform a range of occupationally or professionally relevant communicative tasks with members of another cultural and linguistic community using the language of that community, whether that community is domestic or abroad." They also call for a field-specific language learning framework designed to guide the defining of competencies and "how these competencies are best acquired so as to focus scarce resources in the most efficient manner possible on curricular design, the development of instructional materials, the application of new teaching methodologies, teacher training and assessment, and research related to language acquisition" (pp. 8-9).

Wiggins (1994) has identified a set of criteria by which to distinguish authentic forms of testing. His list includes the important notion of making the criteria and standards clear--de-mystifying them--so that accurate self-assessment and self-adjustment by the student can be fostered. Yap (1993) reported the results of a research project involving thirty-five adult basic education (ABE) and English as a second language (ESL) programs. Writing assessment, portfolio assessment, and classroom assessment were shown to be valid approaches to the type of authentic assessment called for within the profession. Peirce, Swain, and Hart (1993) reported on a study of 500 eighth-grade students, suggesting that self-assessment was a valid and reliable measure of language proficiency. Pavis (1988) reported similar results for college students learning French based on a journal writing project in which students monitored their own learning and identified problems encountered as well as accomplishments over the course of the term. Allwright (1988) has argued that greater quality of learning can be ensured by putting the control over learning in the place where the learning is occurring, namely in the mind of the learner. According to studies such as these, alternative assessment that involves the learner in self-assessment is recommended, despite possible claims that subjectivity is a negative factor in its use. Heilenmann (1990) and Blanche (1990), in separate research projects, reported results involving students in self-assessment.

Portfolio assessment is an ongoing process involving the student and teacher in selecting samples of student work for inclusion in a collection, the main purpose of which is to show the student's progress. The use of this procedure is increasing in the language field, particularly with respect to the writing skill. It makes intuitive sense to involve students in decisions about which pieces of their work to assess, and to assure that feedback is provided. Both teacher and peer reviews are important. Perhaps the greatest overall benefit of using portfolio assessment is that the students are taught by example to become independent thinkers, and the development of their autonomy as learners is facilitated.

It is important to remember that a portfolio is much more than a simple folder of student work. A wide variety of portfolios exists: working portfolio, performance portfolio, assessment portfolio, group portfolio, application (e.g., for college admission) portfolio, and so forth. Depending on the purpose, one is likely to find any of these items: samples of creative work; tests; quizzes; homework; projects and assignments; audiotapes of oral work; student diary entries; log of work on a particular assignment; self-assessments; comments from peers; and comments from teachers.

Even young students know that some of them simply do not do well on tests, often through no failure on their part to study or prepare. Because language performance depends so heavily on the purposes for which students are using the language and the context in which they do so, the importance of flexible and frequent opportunities for practice on the part of the students cannot be overstated. In the real world, most of us have more than one opportunity to demonstrate that we can complete tasks successfully, whether at work or in social settings. So, it makes sense to provide similar opportunities for students in instruction.

The call for increased use of meaningful (authentic) assessments that involve language students in selecting and reflecting on their learning means that language teachers will have a wider range of evidence on which to judge whether students are becoming competent, purposeful language users. It also means that language programs will become more responsive to the differing learning styles of students and value diversity therein. Finally, language programs that focus on alternative assessment are likely to instill in students lifelong skills related to critical thinking that build a basis for future learning, and enable them to evaluate what they learn both in and outside of the language class.

Allwright, R. (1988). Autonomy and individuation in whole class instruction. In Brooks, A., & Grundy, P. (Eds.), "Individuation and autonomy in language learning" (pp. 35-44). British Council.

Blanche, P. (1990). Using standardized achievement and oral proficiency tests for self-assessment purposes: The DLIFLC study. "Language Testing," 7, 202-229.

Brecht, R., & Walton, R. (1993). "National strategic planning in the less commonly taught languages. Occasional papers." Washington, DC: National Foreign Language Center.

Fodor, J. (1983). "The modularity of mind." Cambridge, MA: MIT Press.

Gardner, H. (1993). "Multiple intelligences: The theory in practice." New York: Basic Books.

Gruber, H. (1985). Giftedness and moral responsibility: Creative thinking and human survival. In Horowitz, F., & O'Brien, M., (Eds.), "The gifted and the talented: Developmental perspectives." Washington, DC: American Psychological Association.

Hancock, C.R. (Ed.). (1994). "Teaching, testing, and assessing: Making the connection. Northeast Conference Reports." Lincolnwood, IL: National Textbook Co.

Heilenmann, K.L. (1990). Self-assessment of second language ability: The role of response effects. "Language Testing," 7, 174-201.

Oller, J.W., Jr. (1979). "Language tests at school." London: Longman.

Pavis, J. (1988). Le carnet de bord (The ship's log). "Le Francais dans le Monde," 218, 54-57.

Peirce, B.N., Swain, M., & Hart, D. (1993). Self-assessment in two French immersion programs. "Applied Linguistics," 14, 25-42.

Perkins, D. (1981). "The mind's best work." Cambridge, MA: Harvard University Press.

Spolsky, B. (1992). Diagnostic testing revisited. In Shohamy, E., & Walton, R.A. (Eds.), "Language assessment and feedback: Testing and other strategies" (pp. 29-39). National Foreign Language Center. Dubuque, IA: Kendall/Hunt Publishing Co.

Sternberg, R. (Ed.). (1988). "The nature of creativity." New York: Cambridge University Press.

Wiggins, G. (1994). Toward more authentic assessment of language performances. In Hancock, C.R. (Ed.), "Teaching, testing, and assessing: Making the connection. Northeast Conference Reports." Lincolnwood, IL: National Textbook Co.

Yap, K.O. (1993). "Integrating assessment with instruction in ABE/ESL programs." Paper presented at the annual meeting of the American Educational Research Association. (ED 359 210)


This report was prepared with funding from the Office of Educational Research and Improvement, U.S. Dept. of Education, under contract no. RR93002010. The opinions expressed do not necessarily reflect the positions or policies of OERI or ED.

Title: Alternative Assessment and Second Language Study: What and Why? ERIC Digest.
Author: Hancock, Charles R.
Publication Year: Jul 1994
Document Identifier: ERIC Document Reproduction Service No ED376695
Document Type: ERIC Product (071); ERIC Digests (Selected) (073)
Target Audience: Teachers and Practitioners

Descriptors: Comparative Analysis; * Evaluation Methods; * Portfolio Assessment; Second Language Instruction; Second Language Learning; Second Language Programs; * Student Evaluation; * Testing

Identifiers: *Alternative Assessment; ERIC Digests
