Permission is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. Please notify the editor if an article is to be used in a newsletter.
Rather than using large samples and following a rigid protocol to examine a limited number of variables, case study methods involve an in-depth, longitudinal examination of a single instance or event. The case study is a systematic way of looking at what is happening, collecting data, analyzing information, and reporting the results. The product is a sharpened understanding of why the instance happened as it did, and what might be important to look at more extensively in future research. Case studies are thus especially well suited to generating, rather than testing, hypotheses.
Intended for the consumer of case studies, this article briefly discusses six types of case studies, based on the framework provided by Datta (1990). For each, we present the types of evaluation questions that can be answered, the functions served, some design features, and some pitfalls.
TYPES OF CASE STUDIES
Illustrative Case Studies are descriptive; they use one or two instances to show what a situation is like. This helps interpret other data, especially when there is reason to believe that readers know too little about a program. These case studies serve to make the unfamiliar familiar and to give readers a common language about the topic. The chosen site should be typical of important variations and contain a small number of cases to sustain readers' interest.
There are pitfalls in presenting illustrative case studies. They require presentation of in-depth information on each illustration; there may not be time on-site for in-depth examination. The most serious problem is with the selection of instances. The case(s) must adequately represent the situation or program. Where significant diversity exists, it may not be possible to select a typical site.
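To make the site-selection pitfall concrete, the following is a minimal sketch, in Python, of one simple way an evaluator might gauge which candidate site is most "typical": score each site by its distance from the average profile of a few program characteristics. The sites, characteristics, and values are entirely hypothetical.

```python
# Minimal sketch: rank candidate sites by "typicality" for an
# illustrative case study. All site data below are hypothetical.

# Each site is described by a few program characteristics
# (e.g., enrollment, staffing ratio, funding share), scaled 0-1
# so they are comparable.
sites = {
    "Site A": [0.30, 0.55, 0.40],
    "Site B": [0.50, 0.50, 0.45],
    "Site C": [0.90, 0.20, 0.80],
}

n_traits = len(next(iter(sites.values())))

# Profile of the "average" site across all candidates.
centroid = [
    sum(traits[i] for traits in sites.values()) / len(sites)
    for i in range(n_traits)
]

def distance(traits, center):
    """Euclidean distance from the average profile; lower = more typical."""
    return sum((t - c) ** 2 for t, c in zip(traits, center)) ** 0.5

# Rank sites from most to least typical.
for site in sorted(sites, key=lambda s: distance(sites[s], centroid)):
    print(f"{site}: distance {distance(sites[site], centroid):.3f}")
```

When diversity is large, every candidate sits far from the average profile, which is exactly the condition under which no single "typical" site can be chosen.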
Exploratory Case Studies are condensed case studies, undertaken before implementing a large-scale investigation. Where considerable uncertainty exists about program operations, goals, and results, exploratory case studies help identify questions, select measurement constructs, and develop measures; they also serve to safeguard investment in larger studies. The greatest pitfall in the exploratory study is prematurity: the findings may seem convincing enough to be released inappropriately as conclusions. Other pitfalls include the tendency to extend the exploratory phase, and inadequate representation of diversity.
Critical Instance Case Studies examine one or a few sites for one of two purposes. The more frequent application is the examination of a situation of unique interest, with little or no concern for generalizability. A second, rarer, application tests a highly generalized or universal assertion that has been called into question; a single instance can suffice to challenge it. This method is particularly suited to answering cause-and-effect questions about the instance of concern. The most serious pitfall in this application is inadequate specification of the evaluation question; probing the underlying concerns in a request is therefore crucial to the appropriate application of the critical instance case study.
Program Implementation Case Studies help discern whether a program's implementation complies with its intent. These case studies are also useful when concern exists about implementation problems. Extensive, longitudinal reports of what has happened over time can set a context for interpreting a finding of implementation variability. In either case, generalization is wanted, and the evaluation questions must be carefully negotiated with the customer. A requirement for good program implementation case studies is investment of sufficient time to obtain longitudinal data and breadth of information. Multiple sites are typically required to answer program implementation questions; this imposes demands on the training and supervision needed for quality control. The combined demands of data management, quality control, validation procedures, and the analytic model (within site, cross site, etc.) may tempt evaluators to cut corners, compromising quality.
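As a rough illustration of the compliance question, one might code field notes from each site into program components and tally them against the intended design. The components and site records in this Python sketch are invented for illustration.

```python
# Sketch: tally implementation-compliance codes across multiple sites.
# Components and site records are hypothetical.

# Program components that the design says every site must implement.
intended = {"staff training", "weekly tutoring", "parent outreach"}

# Field notes coded into components actually observed at each site.
observed = {
    "Site A": {"staff training", "weekly tutoring", "parent outreach"},
    "Site B": {"staff training", "weekly tutoring"},
    "Site C": {"weekly tutoring"},
}

for site, components in observed.items():
    missing = intended - components
    status = "in compliance" if not missing else "missing: " + ", ".join(sorted(missing))
    print(f"{site}: {status}")

# Cross-site variation in which components are missing is itself a
# finding that longitudinal data can help interpret.
```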
Program Effects Case Studies can determine the impact of programs and provide inference about reasons for success or failure. Like the program implementation case study, the evaluation questions usually require generalizability and, for a highly diverse program, it may be difficult to answer the questions adequately and retain a manageable number of sites. There are methodological solutions to this problem. One is to first conduct the case studies in sites chosen for their representativeness, then verify these findings through examination of administrative data, prior reports, or a survey. Another solution is to use other methods first. After identifying findings of specific interest, case studies could then be implemented in selected sites to maximize the usefulness of the information.
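As a rough sketch of the first solution, an evaluator might compare an outcome observed at the representative case-study sites against the same measure in broader administrative or survey data. All site names and figures below are hypothetical, and the one-standard-deviation screen is an arbitrary illustrative threshold, not a prescribed test.

```python
# Sketch: verify case-study findings against broader survey data.
# All values are hypothetical illustrations, not real program data.
from statistics import mean, stdev

# Outcome measure (e.g., completion rate) at the case-study sites,
# which were chosen for representativeness.
case_sites = {"Site A": 0.62, "Site B": 0.58, "Site C": 0.66}

# The same measure from a larger administrative/survey sample.
survey_sample = [0.55, 0.61, 0.64, 0.59, 0.70, 0.52, 0.63, 0.60]

case_mean = mean(case_sites.values())
survey_mean = mean(survey_sample)
survey_sd = stdev(survey_sample)

print(f"Case-study sites: mean outcome {case_mean:.2f}")
print(f"Survey sample:    mean outcome {survey_mean:.2f} (sd {survey_sd:.2f})")

# A crude plausibility check: flag the case-study estimate if it falls
# more than one standard deviation from the survey mean.
if abs(case_mean - survey_mean) > survey_sd:
    print("Case-study estimate diverges from the survey; revisit site selection.")
else:
    print("Case-study estimate is consistent with the broader data.")
```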
Cumulative Case Studies aggregate information from several sites collected at different times. The cumulative case study can be retrospective, collecting information across studies done in the past, or prospective, structuring a series of investigations for different times in the future. Retrospective cumulation allows generalization without the cost and time of conducting numerous new case studies; prospective cumulation also allows generalization without unmanageably large numbers of cases in process at any one time. The techniques for ensuring sufficient comparability and quality and for aggregating the information are what constitute the "cumulative" part of the methodology. Two features of the cumulative case study are the case survey method, used as a means of aggregating findings, and backfill techniques. The latter are helpful in retrospective cumulation as a means of obtaining information from authors that permits use of otherwise insufficiently detailed case studies. Opinions vary as to the credibility of cumulative case studies for answering program implementation and effects questions. One authority notes that publication biases may favor programs that seem to work, which could lead to a misleading positive view (Berger, 1983). Others are concerned about problems in verifying the quality of the original data and analyses (Yin, 1989).
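To illustrate the case survey idea, here is a minimal sketch of aggregating coded findings across several prior case studies. The studies, codes, quality ratings, and cutoff are all invented for illustration; a real case survey would use a validated coding instrument.

```python
# Sketch of case-survey aggregation: each prior case study is reduced
# to a coded finding plus a quality rating, then tallied.
# Studies, codes, and ratings below are invented for illustration.

studies = [
    # (study id, reported effect: "positive"/"null", data quality 0-1)
    ("Study 1", "positive", 0.9),
    ("Study 2", "positive", 0.4),
    ("Study 3", "null",     0.8),
    ("Study 4", "positive", 0.7),
    ("Study 5", "null",     0.3),
]

# Screen out studies whose documentation is too thin to verify (the
# quality concern raised by Yin, 1989); in retrospective cumulation,
# backfill queries to authors might rescue some of these instead.
MIN_QUALITY = 0.5
usable = [(s, e, q) for s, e, q in studies if q >= MIN_QUALITY]

positive = sum(1 for _, effect, _ in usable if effect == "positive")
share = positive / len(usable)

print(f"Usable studies: {len(usable)} of {len(studies)}")
print(f"Share reporting positive effects: {share:.0%}")
print("Caution: published case studies may over-represent programs "
      "that seem to work (Berger, 1983).")
```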
CONCLUSIONS
The case study is a method of learning about a complex instance through extensive description and contextual analysis. The product is an articulation of why the instance occurred as it did, and what may be important to explore in similar situations.
We have presented six types of case study application, with different strengths and limitations. Evaluators considering the case study as a design for evaluation must first decide what type of evaluation question they have and then examine the ability of each type of case study to answer it. The crucial next step is determining whether the methodological requirements of the chosen case study method can be met in the situation at hand.
Case studies can generate a great deal of data that may not be easy to analyze. Details on conducting a case study, especially with regard to data collection and analysis, can be found in the references listed below.
REFERENCES
Berger, Michael A. (1983). "Studying Enrollment Decline (and Other Timely Issues) via the Case Survey." Educational Evaluation and Policy Analysis, 5(3), 307-317.
Datta, Lois-ellin (1990). Case Study Evaluations. Washington, DC: U.S. General Accounting Office, Transfer paper 10.1.9.
Miles, Matthew B., and Huberman, A.M. (1984). Qualitative Data Analysis: A Sourcebook of New Methods. Beverly Hills, CA: Sage.
Yin, Robert K. (1989). Case Study Research: Design and Methods. Beverly Hills, CA: Sage.
Descriptors: *Case Studies; Educational Assessment; *Program Evaluation; Program Implementation; Qualitative Research; *Research Methodology