Clearinghouse on Assessment and Evaluation

From the CEEE and
the Clearinghouse on Assessment and Evaluation

Woodcock-Muñoz Language Survey - English





Test Name: Woodcock-Muñoz Language Survey - English
Publisher: Riverside Publishing
Publication Date: 1993
Test Type: Language Proficiency
Content: 4 Language Skills
Language: English
Target Population: English Language Learner
Grade Level: P,K,1,2,3,4,5,6,7,8,9,10,11,12,Adult
Administration Time: Untimed/guidelines
Standardized: Yes
Purpose: Language Dominance; Placement; Proficiency; Program Exit; Progress; Program Evaluation

Abstract:
The Woodcock-Muñoz Language Survey (English) is a set of tests that measures proficiency in oral language, reading, and writing for speakers of English as a second language who are at least two years of age. It provides scores for individual skills as well as an overall language competence score called Broad English Ability. The test can be used to classify an examinee's English proficiency through percentile ranks, determine eligibility for bilingual services, help teachers understand an examinee's language abilities, assess an examinee's progress or readiness for English-only instruction, provide information about program effectiveness, and describe the language characteristics of subjects in research studies.

The test is administered individually, and items are presented using an illustrated easel book. The four subtests are:

1) Picture Vocabulary, in which examinees name pictured objects;
2) Verbal Analogies, in which increasingly complicated relationships between words must be understood;
3) Letter-Word Identification, in which the simplest items ask examinees to match line drawings of objects with more explicit, colored pictures of the same objects, while more difficult items require examinees to pronounce written words that are progressively less common in English; and
4) Dictation, in which low-proficiency examinees demonstrate prewriting skills such as drawing lines and copying letters, and higher-proficiency examinees write responses to questions that demonstrate knowledge of spelling, punctuation, capitalization, word usage, and word forms.

The test administrator can avoid presenting examinees with items of inappropriately low difficulty by using the guidelines for estimating an examinee's approximate ability level. Overestimates can be corrected by testing backward in the test book until the appropriate level is reached.
When an examinee has failed six consecutive items, the upper level of his or her proficiency has been reached. For each item, the correct response or responses appear on the test administrator's side of the easel, along with some common incorrect responses and some vague responses that require further probing. The administrator records responses on a test record sheet, which is also where examinees write their responses to the Dictation portion of the test.

There are two scoring options. Using charts printed on the test record, the administrator can compute the raw score and convert it into age and grade equivalents, as well as identify the Instructional Range (the proficiency level below which language tasks are easy for the examinee and above which they are difficult). Alternatively, the 3.5-inch or 5.25-inch IBM-compatible floppy disks can be used to generate the Report of Language Proficiency Testing, which presents the same numerical scores as a hand-scored report but adds several narrative paragraphs explaining the examinee's proficiency.

Regardless of the scoring method, several composite scores are offered: the Broad English Ability cluster, an overall test score; the Oral Language cluster, which combines expressive vocabulary with verbal comprehension and is derived from the Picture Vocabulary and Verbal Analogies subtests; and the Reading-Writing cluster, which measures reading identification and basic writing skills and is derived from the Letter-Word Identification and Dictation subtests. In addition, one of five proficiency levels is assigned: Advanced, Fluent, Limited, Very Limited, or Negligible. Norms were established on a national sample of 6,359 subjects who were randomly selected within a stratified sampling design.
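The six-consecutive-failures stopping rule described above amounts to a simple scan over an examinee's pass/fail record. The sketch below is purely illustrative (the function name and the list-of-booleans representation are assumptions, not part of the published test materials):

```python
def ceiling_item(responses, ceiling_run=6):
    """Return the 1-based number of the item at which testing stops
    (i.e., the sixth failure in a consecutive run), or None if no
    ceiling is reached. `responses` is a list of booleans, True = pass."""
    consecutive_failures = 0
    for item_number, passed in enumerate(responses, start=1):
        if passed:
            consecutive_failures = 0  # a pass breaks the run
        else:
            consecutive_failures += 1
            if consecutive_failures == ceiling_run:
                return item_number
    return None
```

For example, an examinee who passes items 1-2, fails item 3, passes item 4, and then fails items 5-10 reaches the ceiling at item 10.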
Split-half reliability coefficients for the four individual tests and the three composite scores are mostly in the .80s and .90s. For preschool subjects, concurrent validity was estimated against seven other tests, including the Boehm Basic Concepts Test, the Bracken Basic Concepts Test, and the Stanford-Binet IV. These validity coefficients ranged from .117 to .802, with most of the lowest values resulting from comparisons with the Dictation portion of the Woodcock-Muñoz. It should be noted that the Dictation portion is unusual, and low concurrent validity for that portion may only mean that the other tests do not include a similar section. For all older age groups, concurrent validity was estimated by comparing the three cluster scores listed above with the Woodcock Language Proficiency Battery - Revised. Most coefficients ranged from .70 to .90 across age groups, with a few lower values coming from comparisons of the scores of children ages 6 and 9. Additional information about reliability and validity is found in the Comprehensive Manual.
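Split-half reliability coefficients of the kind reported above are conventionally computed by correlating odd-item and even-item half scores and applying the Spearman-Brown correction, r_full = 2r / (1 + r). The sketch below illustrates that standard procedure; it is not the publisher's scoring software, and the function names and data layout are assumptions:

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Odd-even split-half reliability with Spearman-Brown correction.
    `item_scores` is a list of rows, one per examinee, each row holding
    per-item scores in administration order."""
    odd_halves = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even_halves = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r = pearson(odd_halves, even_halves)
    return 2 * r / (1 + r)  # step up to full-length reliability
```

When examinees' half scores agree perfectly, the corrected coefficient is 1.0; values in the .80s and .90s, as reported for this survey, indicate that the two halves rank examinees very similarly.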



©1999-2012 Clearinghouse on Assessment and Evaluation. All rights reserved. Your privacy is guaranteed at ericae.net.
