Do CAS Planning Subtests Measure Planning or Processing Speed?

Although I write mostly about the Cattell-Horn-Carroll (CHC) Theory of Cognitive Abilities (e.g., Schneider & McGrew, 2012), I have long been an admirer of the Cognitive Assessment System (CAS; Naglieri & Das, 1997), which is associated with PASS Theory (Planning-Attention-Simultaneous-Successive; Das, Naglieri, & Kirby, 1994). CHC Theory and PASS Theory are to some degree rivals, but thoughtful scholars have begun to show that they can be integrated productively (Flanagan, Alfonso, & Dixon, 2014; Kaufman & Kaufman, 2004).

There is evidence that the CAS is a stronger predictor of academic abilities than many of its competitors (e.g., Naglieri, De Lauder, Goldstein, & Schwebach, 2006). I think that some of the subtests in the CAS are marvelously ingenious. I am very much looking forward to the publication of the second edition of the CAS.

The theoretical basis of some of the CAS subtests has been questioned by a number of scholars. In the planning subtests of the CAS, evaluees are asked to complete speeded tasks as quickly as they can. However, these tasks differ from typical measures of processing speed (Gs) in that the instructions direct evaluees to complete the task in whatever manner they wish. Evaluees who use an efficient strategy will likely earn higher scores.

There are seemingly contradictory findings about the CAS planning tests. Cross-battery factor analyses suggest that rather than measuring “planning,” the CAS Planning subtests load with measures of Gs and are indistinguishable from the speeded attention tests on the CAS (Keith, Kranzler, & Flanagan, 2001; Kranzler & Keith, 1999). On the other hand, Haddad (2004) found, via experimental manipulations of the instructions for the Planned Codes subtest, that when participants had to complete the task sequentially (as in most Gs tests), scores were about 1 SD higher than when participants were allowed to use whatever strategy they wished. Because scores in the two conditions had relatively low correlations with each other, Haddad concluded that fundamentally different processes were being measured.

Can both sets of findings be explained? I believe it is reasonable to assume that the planning subtests on the CAS are indeed measuring planning, but in the context of a Gs test. A thorough examination of the strategies that children use while taking the CAS planning subtests might reveal what is happening. (It is possible that such studies already exist, but I was unable to locate them.) The figure below shows simulated data consistent with the cross-battery studies: the Planned Codes subtest behaves as just another measure of the latent variable Gs.

[Figure: CAS Planning as just another measure of Gs]

Now imagine that we record whether the child used an efficient strategy or an inefficient strategy. The same data from above were used, but now the evaluees are grouped by strategy use.

[Figure: CAS Planning as both Gs and planning]

In this simulation, 25% of evaluees used the efficient strategy. The efficient strategy boosted scores by 2 points (2/3 of a standard deviation). This is not a small effect, yet the correlation between Gs and the subtest scores is about the same whether strategy use is controlled for or not (0.74 vs. 0.71). Thus, if you are looking for Gs, you will find it in this dataset. If you are looking for planning, you'll find that, too.

The R code used to generate the simulated data (only the first 3000 cases were plotted):

n <- 1000000                            # Sample size
Gs <- rnorm(n)                          # Processing speed (z-scores)
StrategyUse <- floor(runif(n) + 0.25)   # 1 = efficient strategy (~25% of cases)
lGs <- sqrt(0.5)                        # Loading of the subtest on Gs
# Scaled score (mean = 10, SD = 3): Gs effect plus a 2-point strategy boost,
# with residual noise sized so that the total variance is exactly 9
Planning <- lGs * 3 * Gs + 2 * StrategyUse + 10 - mean(StrategyUse * 2) +
  rnorm(n) * sqrt(9 - (lGs * 3)^2 - var(StrategyUse * 2) -
                  2 * cov(StrategyUse * 2, 3 * lGs * Gs))
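
As a quick check on the correlations reported above, the raw correlation can be compared with a partial correlation that removes strategy use from both variables (a minimal sketch using the variables just defined):

cor(Gs, Planning)                        # ≈ 0.71 (strategy use ignored)
cor(resid(lm(Gs ~ StrategyUse)),
    resid(lm(Planning ~ StrategyUse)))   # ≈ 0.74 (strategy use controlled for)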

Note that I am not claiming to have found the explanation for the seemingly contradictory findings. I am merely suggesting that it is possible that they can be reconciled. Also note that there are many other models that could explain the findings.

In the example above, strategy use and Gs were uncorrelated. However, it is likely that they are correlated to some degree (as all abilities are). Imagine that people with higher Gs are more likely to discover and use the efficient strategy:

[Figure: Probability of efficient strategy use as a function of Gs]

The R code is the same as before, except that StrategyUse is redefined as follows (with the Planning line then rerun):

StrategyUse <- floor(runif(n) + pnorm(Gs - 1))   # P(strategy use) rises with Gs
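
Under this assumption, roughly a quarter of simulees still use the efficient strategy, but use is now concentrated among those with higher Gs. A quick check (the cut points are arbitrary, and the 0.24 figure is my own calculation):

mean(StrategyUse)                                       # ≈ 0.24 overall
tapply(StrategyUse, cut(Gs, c(-Inf, 0, 1, Inf)), mean)  # rate rises steeply with Gs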

If strategy use is not controlled for, the subtest looks like an even better measure of Gs than in the previous model:

[Figure: CAS Planning as just another measure of Gs, correlated-strategy model]

When strategy use is distinguished:

[Figure: CAS Planning as both Gs and planning, correlated-strategy model]

In this case, unless strategy use is observed and taken into account, it will appear as if the subtest measures Gs and nothing else. However, planning (i.e., using the efficient strategy) clearly does matter, giving a 4.6-point advantage (about 1.5 standard deviations). Again, this is just one of many possible models.
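
Both of these observations can be verified directly in the simulation (assuming the Planning line above has been rerun after redefining StrategyUse; the 0.85 value is my own calculation, not a figure from the original analyses):

cor(Gs, Planning)                     # ≈ 0.85 (up from 0.71): strategy use now carries Gs variance
mean(Planning[StrategyUse == 1]) -
  mean(Planning[StrategyUse == 0])    # ≈ 4.6-point advantage for strategy users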

If observational data were used to develop a robust model of the relationship between strategy use and Gs, procedures could be devised to estimate the planning and Gs aspects of the tests separately for individuals. A likely result of a thorough investigation of the planning component of the CAS subtests is that efficient strategies are more likely to be used by older children and adolescents. If the overwhelming majority of adolescents use efficient strategies, the tests are sensitive to planning ability only at the low end of the construct. Even so, clinical observations of strategy use could be used to more accurately identify planning deficits and distinguish them from slow processing speed.
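
As a proof of concept within the simulation (not a proposal for an actual scoring procedure), a regression that includes observed strategy use separates the two components cleanly, because the direct strategy effect and the Gs loading were built into the data:

coef(lm(Planning ~ Gs + StrategyUse))   # Gs slope ≈ 2.12 (i.e., 3 * sqrt(0.5)); StrategyUse effect ≈ 2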

References

Das, J. P., Naglieri, J. A., & Kirby, J. R. (1994). Assessment of cognitive processes: The PASS theory of intelligence. Boston: Allyn and Bacon.

Flanagan, D. P., Alfonso, V. C., & Dixon, S. G. (2014). Cross-battery approach to the assessment of executive functions. In S. Goldstein & J. A. Naglieri (Eds.), Handbook of executive functioning (pp. 379–409). New York: Springer.

Haddad, F. A. (2004). Planning versus speed: An experimental examination of what Planned Codes of the Cognitive Assessment System measures. Archives of Clinical Neuropsychology, 19, 313–317.

Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman Assessment Battery for Children-Second Edition (KABC-II). Circle Pines, MN: American Guidance Service.

Keith, T. Z., Kranzler, J. H., & Flanagan, D. P. (2001). What does the Cognitive Assessment System (CAS) measure? Joint confirmatory factor analysis of the CAS and the Woodcock-Johnson Tests of Cognitive Ability (3rd ed.). School Psychology Review, 30, 89–119.

Kranzler, J. H., & Keith, T. Z. (1999). Independent confirmatory factor analysis of the Cognitive Assessment System (CAS): What does the CAS measure? School Psychology Review, 28, 117–144.

Naglieri, J. A., & Das, J. P. (1997). Cognitive Assessment System. Itasca, IL: Riverside.

Naglieri, J. A., De Lauder, B., Goldstein, S., & Schwebach, A. (2006). WISC-III and CAS: Which correlates higher with achievement for a clinical sample? School Psychology Quarterly, 21, 62–76.

Schneider, W. J., & McGrew, K. S. (2012). The Cattell-Horn-Carroll model of intelligence. In D. Flanagan & P. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 99–144). New York: Guilford.
