Cassady, Jerrell C. (2001). The stability of undergraduate students' cognitive test anxiety levels. Practical Assessment, Research & Evaluation, 7(20). Retrieved August 18, 2006 from http://edresearch.org/pare/getvn.asp?v=7&n=20.

The Stability of Undergraduate Students’ Cognitive Test Anxiety Levels

Jerrell C. Cassady
Department of Educational Psychology
Ball State University

Test anxiety has been overwhelmingly identified as a two-factor construct, consisting of the cognitive (often referred to as "worry") and emotional (or affective) components (Morris, Davis, & Hutchings, 1981; Schwarzer, 1986). The predominant view of the relationship between these two factors suggests the cognitive component directly impacts performance (Bandalos, Yates, & Thorndike-Christ, 1995; Cassady & Johnson, in press; Hembree, 1988), while the emotionality component is related but does not directly influence test performance (Sarason, 1986; Williams, 1991). The apparent relationship between emotionality and test performance is such that emotionality impacts test performance only under situations where the individual also maintains a high level of cognitive test anxiety (Deffenbacher, 1980; Hodapp, Glanzmann, & Laux, 1995). Although emotionality has traditionally not been viewed as central to performance, recent work has demonstrated that emotionality may be the triggering mechanism for self-regulation strategies that facilitate performance (Schutz & Davis, 2000).

This study investigated the stability of test anxiety over time by examining the level of reported cognitive test anxiety at three points in an academic semester (all proximally close to exams). These comparisons were intended to track fluctuations in the level of anxiety over time and across testing formats. The expectation was that cognitive test anxiety is a relatively stable (trait-like) construct, and that students’ levels of anxiety would reflect a high degree of similarity within subjects over time. The underlying purpose of the study was to determine whether it is indeed necessary to evaluate levels of test anxiety for each test taken, or whether test anxiety is stable enough that evaluation at one point in time is sufficient for research that spans multiple exams. Because stability is largely influenced by the internal consistency, or reliability, of the measure in question, it was also imperative to investigate the level of scale reliability at each point in the study. Under conditions where a scale (a) is internally consistent (reliable) and (b) demonstrates a high level of similarity in responses over extended periods of time, it is reasonable to conclude that the measured construct is stable (Nunnally & Bernstein, 1994).

Standard conceptualizations of the cognitive test anxiety construct have addressed the interplay between state and trait anxiety (Snow, Corno, & Jackson, 1996; Spielberger & Vagg, 1995). In this conceptualization, individuals with high levels of cognitive test anxiety generally hold heightened levels of trait anxiety, but in evaluative situations, their state anxiety also elevates (Zeidner, 1995). This combinatory relationship can lead to feelings of anxiety that interfere with test performance through blocks to cue utilization, attenuated attentional resources, or mere cognitive interference from the worries and fears induced by anxiety (Geen, 1980; Hembree, 1988; Sarason, 1986). This relationship has also been characterized as an additive function of the dispositional and situational anxiety influences faced by students in evaluative scenarios (Zohar, 1998).

The attention to the situation-specific factors that lead to test-anxious thoughts and behaviors has promoted the methodological practice of gathering test anxiety data as close to the testing event as possible, to best capture the contextual influences on anxiety in the research design (Cassady & Johnson, in press; Covington, 1985; Hodapp et al., 1995; Zeidner, 1998). Thus, the ideal time to measure test anxiety would be during the examination itself, with the subject providing "online" reports of the immediate feelings, fears, and behavioral responses arising during evaluation. However, this cannot occur in research designs that use students in actual testing situations, because having students respond to items addressing their level of worry or fear about tests would likely induce additional debilitative cognitive test anxiety. Further, the implication is that measuring the performance-test anxiety relationship over a series of tests would require repeated administration of the test anxiety measures. These conditions are pragmatically undesirable, and potentially unnecessary.

Method

Participants

Undergraduate students in an introductory educational psychology course were the participants in this investigation. The participants completed the study instruments during one academic semester, as one option for completing course credit. Sixty-four undergraduate students participated in the three phases of the study, although several completed only portions of the measures. The participants were predominantly White (n = 62), with two Black students participating (which included all Black students available for participation). Consistent with the course population, there were 47 females and 17 males.

Materials

All measures in this study have been validated and found to have high levels of internal consistency (Cassady, 2001b; Cassady & Johnson, in press). However, there has been no investigation of the test-retest reliability of the Cognitive Test Anxiety scale, or of the stability of test anxiety over time as measured by this instrument.

The Cognitive Test Anxiety scale (Cassady & Johnson, in press) is a 27-item measure focused solely on the cognitive domain of test anxiety, formerly referred to as worry. The cognitive domain includes the tendency to engage in task-irrelevant thinking during test taking and preparation periods, the tendency to draw comparisons to others during those same periods, and the likelihood of experiencing intrusive thoughts during exams and study sessions or of having relevant cues escape the learner’s attention during testing. In previous investigations of the Cognitive Test Anxiety scale, involving over 1,000 participants, a reliable method for determining high and low levels of test anxiety has been documented (Cassady, 2001a; 2001b; Cassady & Johnson, in press), which splits respondents into three levels of test anxiety. Scores from 27 to 61 fall in the low test anxiety group, scores from 62 to 71 in the moderate group, and scores of 72 or higher in the high test anxiety group (maximum possible score = 108).
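For illustration only (not part of the original study materials), the published cut-offs imply a simple scoring rule, sketched below in Python under the assumption that each of the 27 items is scored from 1 to 4 so that totals range from 27 to 108; the function and variable names are hypothetical.

    # Minimal sketch (hypothetical names): mapping a Cognitive Test Anxiety total
    # score onto the cut-offs reported above. Assumes 27 items scored 1-4, so
    # totals range from 27 to 108.
    def categorize_cta(item_responses):
        """Sum the 27 item responses and return (total score, anxiety group)."""
        if len(item_responses) != 27:
            raise ValueError("The Cognitive Test Anxiety scale has 27 items.")
        total = sum(item_responses)
        if total <= 61:
            group = "low"        # 27-61
        elif total <= 71:
            group = "moderate"   # 62-71
        else:
            group = "high"       # 72-108
        return total, group

    # Example: answering "2" to every item yields 54, which falls in the low group.
    print(categorize_cta([2] * 27))  # -> (54, 'low')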

Sarason’s (1984) Bodily Symptoms subscale from the Reactions to Tests four-factor scale of test anxiety was used as a measure of the emotionality component of test anxiety. The Bodily Symptoms subscale has been shown to have an acceptable degree of internal consistency despite its short length (k = 10; Sarason, 1984; 1986).

Procedures

Throughout the course of one academic semester, students were invited to complete the Cognitive Test Anxiety scale and the Bodily Symptoms subscale three times. The timing of completion of these scales was aligned with the testing times in the course. All students completed the scales no more than seven days before they took the course examination. The time was variable due to the students’ ability to choose which day to complete the examination. All scales were completed in groups, in the students’ course classroom. Scale completion generally took between 8 and 15 minutes.

The first two course exams were 35-item multiple-choice tests, administered online at the students' convenience. However, the third exam was a take-home, open-book exam that carried the same weight in the overall course grade as the other two exams. Because the data collection methods were designed to maintain student confidentiality with respect to the test anxiety and emotionality responses, matching students' test scores to the dependent variables in this study was not possible.

Results

Data analyses focused on examining the stability of the students’ cognitive test anxiety reports and their perceptions of emotionality as measured by bodily symptoms. Correlation analyses were used to illustrate the students’ stable reactions to the test events posed in the course. In addition, internal consistency for each scale was calculated to identify the level of reliability demonstrated in each administration of the two dependent measures. Finally, the correlations among the measures were corrected for attenuation to estimate the hypothetical true score correlations that would be obtained if no measurement error were present in the study measures (Nunnally & Bernstein, 1994).
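For reference, the correction for attenuation described here follows the standard formula (Nunnally & Bernstein, 1994), which estimates the correlation between true scores from the observed correlation and the reliability estimates of the two measures; written in LaTeX notation:

    \[
    \hat{r}_{xy} \;=\; \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
    \]

where \(r_{xy}\) is the observed correlation between the two measures and \(r_{xx}\) and \(r_{yy}\) are their respective reliability (Cronbach's alpha) estimates.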

Initially, descriptive analyses on the test anxiety scores demonstrated that the students in this course had somewhat lower levels of test anxiety than previous uses of the scale (Cassady & Johnson, in press; Cassady 2001a; 2001b). The average cognitive test anxiety scores were similar for test 1 (M = 62.44, SD = 14.41, n = 59), test 2 (M = 62.46, SD = 15.39, n = 57), and test 3 (M = 61.49, SD = 14.94, n = 57), which all place the average score at the pre-established cut-off point between low and moderate cognitive test anxiety (Cassady, 2001a; 2001b). The mean scores on the bodily symptoms measure also were stable across the three administrations, and were somewhat lower than average. With a possible score range of 10 to 40, the average scores were in the low range for test 1 (M = 15.73, SD = 5.10, n = 56), test 2 (M = 16.41, SD = 5.73, n = 56), and test 3 (M = 15.48, SD = 5.62, n = 56).

To measure the stability of cognitive test anxiety and emotionality over the course of a semester, correlational analyses of the three points of administration were conducted (see Table 1). The results demonstrate very strong correlations among the students’ reports of cognitive test anxiety across the three points in the semester, as well as strong correlations among the three emotionality measurements. Further, the correlations between cognitive test anxiety and bodily symptoms are significant and consistent with earlier research on the relationship between the two primary factors of test anxiety (Hembree, 1988). However, the Bodily Symptoms administration that took place before the second exam showed the strongest correlations with all three administrations of the Cognitive Test Anxiety scale. This deviates from the expectation that the contextual factors in place at each administration session would make the cognitive and emotionality measures completed together the most strongly related. Although these correlational values do not vary greatly, the pattern may reflect students adjusting their reports of emotionality in response to the first course examination; consistent with this interpretation, the second administration period was also the point at which the Bodily Symptoms subscale mean was highest for the sample.

Table 1
Cognitive Test Anxiety and Bodily Symptoms Intercorrelation Matrix

Measure             1          2          3          4          5          6
1. CTA (Test 1)¹    .94 (59)   .96        .94        .63        .70        .51
2. CTA (Test 2)     .91 (52)   .95 (57)   .98        .56        .73        .54
3. CTA (Test 3)     .88 (53)   .93 (55)   .94 (57)   .52        .70        .58
4. BS (Test 1)²     .58 (56)   .52 (50)   .48 (50)   .91 (56)   .92        .87
5. BS (Test 2)      .64 (51)   .67 (56)   .64 (54)   .82 (49)   .88 (56)   .94
6. BS (Test 3)      .47 (52)   .50 (54)   .54 (56)   .79 (49)   .84 (53)   .91 (56)

Note: All p’s < .001. Values on the diagonal are Cronbach’s alpha internal consistency coefficients. Values above the diagonal are correlations corrected for attenuation; values below the diagonal are uncorrected. Values in parentheses report the sample size for each analysis.
¹ Cognitive Test Anxiety scale score.
² Bodily Symptoms subscale score.

 

Finally, the internal consistency of each dependent measure was calculated with Cronbach’s alpha for each of the three administration periods and is reported in Table 1 (Nunnally & Bernstein, 1994). These estimates confirmed previous work demonstrating high levels of internal consistency, and were subsequently used to correct the correlational values for attenuation due to measurement error. The corrections for attenuation did not reveal any changes to the pattern of intercorrelations.
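As an illustrative check (not part of the original analysis; the helper name below is hypothetical), applying the attenuation formula to one cell of Table 1 reproduces the reported corrected value: the uncorrected correlation of .91 between the first two Cognitive Test Anxiety administrations, with alphas of .94 and .95, corrects to approximately .96.

    # Illustrative check of the attenuation correction for one Table 1 cell,
    # using the values reported above (alphas .94 and .95, observed r = .91).
    from math import sqrt

    def correct_for_attenuation(r_xy, r_xx, r_yy):
        """Estimate the true-score correlation from an observed correlation
        and the reliabilities of the two measures (Nunnally & Bernstein, 1994)."""
        return r_xy / sqrt(r_xx * r_yy)

    # CTA (Test 1) with CTA (Test 2): observed r = .91, alphas = .94 and .95.
    print(round(correct_for_attenuation(0.91, 0.94, 0.95), 2))  # -> 0.96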

Discussion

The results demonstrate that it is methodologically practical to make use of test anxiety data gathered at times other than when the particular test in question is being completed, provided the data have been collected at a time when typical test-induced contextual variables are activated. That is, it does not appear necessary to gather test anxiety data prior to every test included in the research analyses; data gathered prior to any one test appear sufficient. The high internal consistency, paired with the high degree of correlation among repeated administrations of the Cognitive Test Anxiety scale and the Bodily Symptoms subscale, provides strong evidence that the constructs have long-range stability (Nunnally & Bernstein, 1994).

The data provide useful information regarding an efficient and methodologically sound approach for collecting test anxiety data from undergraduate students. It is reasonable to extrapolate from these results that test anxiety data collected in close proximity to an evaluative event can be used in analyses of the impact of test anxiety on any test within that academic period. The data also demonstrated that the Cognitive Test Anxiety scale, which has been shown to have high levels of internal consistency and high construct validity (Cassady & Johnson, in press), also provides stable and consistent measures of test anxiety over time and across testing formats.

One theoretical implication of these results relates to the interpretation of test anxiety as a failure at multiple levels of information processing (Benjamin, McKeachie, Lin, & Holinger, 1981; McKeachie, 1984; Naveh-Benjamin, 1991). The research in this area has confirmed that students with high test anxiety are prone to failure not only in situations where time pressures attenuate performance and the retrieval of key information, but also in take-home examinations (Benjamin et al., 1981). Although exam performances were not available for this study, and no conclusions regarding the stability of a detrimental impact of test anxiety on performance can be drawn, the results demonstrate that the level of reported anxiety is consistent over time, despite variations in course exam format. That is, the level of anxiety induced by the take-home examination did not differ significantly from the level of anxiety induced by the closed-book multiple-choice examinations. Therefore, it seems that test-anxious thoughts and behaviors are likely prompted by the presence of evaluative tasks, regardless of testing format. This finding extends previous work demonstrating no differential rates of cognitive test anxiety induced by in-class and online testing formats (Cassady, 2001a).

 

References

Bandalos, D. L., Yates, K., & Thorndike-Christ, T. (1995). Effects of math self-concept, perceived self-efficacy, and attributions for failure and success on test anxiety. Journal of Educational Psychology, 87, 611-623.

Benjamin, M., McKeachie, W. J., Lin, Y., & Holinger, D. P. (1981). Test anxiety: Deficits in information processing. Journal of Educational Psychology, 73, 816-824.

Cassady, J. C. (2001a). The effects of online formative and summative assessment on undergraduate students’ achievement and cognitive test anxiety. Manuscript submitted for publication.

Cassady, J. C. (2001b). Cognitive test anxiety in undergraduate students in Kuwait and the United States. Manuscript submitted for publication.

Cassady, J. C., & Johnson, R. E. (in press). Cognitive test anxiety and academic performance. Contemporary Educational Psychology.

Covington, M. V. (1985). Test anxiety: Causes and effects over time. In H. M. van der Ploeg, R. Schwarzer, & C. D. Spielberger (Eds.) Advances in Test Anxiety Research (Vol. 4) (pp. 55-68). Lisse, The Netherlands: Swets & Zeitlinger.

Deffenbacher, J. L. (1980). Worry and emotionality in test anxiety. In I. G. Sarason, (Ed.) Test anxiety: Theory, research, and applications (pp. 111-124). Hillsdale, NJ: Lawrence Erlbaum.

Geen, R. G. (1980). Test anxiety and cue utilization. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and applications (pp. 43-61). Hillsdale, NJ: Lawrence Erlbaum.

Hembree, R. (1988). Correlates, causes, and treatment of test anxiety. Review of Educational Research, 58, 47-77.

Hodapp, V., Glanzmann, P. G., & Laux, L. (1995). Theory and measurement of test anxiety as a situation-specific trait. In C. D. Spielberger & P. R. Vagg (Eds.) Test anxiety: Theory, assessment, and treatment (pp. 47-59). Washington, D.C.: Taylor & Francis.

McKeachie, W. J. (1984). Does anxiety disrupt information processing or does poor information processing lead to anxiety? International Review of Applied Psychology, 33, 187-203.

Morris, L. W., Davis, M. A., & Hutchings, C. H. (1981). Cognitive and emotional components of anxiety: Literature review and a revised worry-emotionality scale. Journal of Educational Psychology, 73, 541-555.

Naveh-Benjamin, M. (1991). A comparison of training programs intended for different types of test-anxious students: Further support for an information-processing model. Journal of Educational Psychology, 83, 134-139.

Naveh-Benjamin, M., McKeachie, W. J., & Lin, Y. (1987). Two types of test-anxious students: Support for an information processing model. Journal of Educational Psychology, 79, 131-136.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd Ed.). New York: McGraw-Hill.

Sarason, I. G. (1984). Stress, anxiety, and cognitive interference: Reactions to Tests. Journal of Personality and Social Psychology, 46, 929-938.

Sarason, I. G. (1986). Test anxiety, worry, and cognitive interference. In R. Schwarzer (Ed.) Self-related cognitions in anxiety and motivation (pp. 19-34). Hillsdale, NJ: LEA.

Schutz, P. A., & Davis, H. A. (2000). Emotions and self-regulation during test taking. Educational Psychologist, 35, 243-256.

Schwarzer, R. (1986). Self-related cognitions in anxiety and motivation: An introduction. In R. Schwarzer (Ed.), Self-related cognitions in anxiety and motivation (pp. 1- 18). Hillsdale, NJ: LEA.

Snow, R. E., Corno, L., & Jackson, D. (1996). Individual differences in affective and conative functions. In D. C. Berliner & R. C. Calfee (Eds.) Handbook of educational psychology (pp. 243-310). New York: Macmillan.

Spielberger, C. D., & Vagg, P. R. (1995). Test anxiety: A transactional process model. In C. D. Spielberger & P. R. Vagg (Eds.) Test anxiety: Theory, assessment, and treatment (pp. 1-14). Washington, D.C.: Taylor & Francis.

Williams, J. E. (1991). Modeling test anxiety, self concept and high school students’ academic achievement. Journal of Research and Development in Education, 25, 51-57.

Zeidner, M. (1995). Adaptive coping with test situations: A review of the literature. Educational Psychologist, 30(3), 123-133.

Zeidner, M. (1998). Test anxiety: The state of the art. New York: Plenum Press.

Zohar, D. (1998). An additive model of test anxiety: Role of exam-specific expectations. Journal of Educational Psychology, 90, 330-340.

 

Descriptors: *Test Anxiety; *Test Reliability; Test Construction; Test Validity
