
No, the WISC-IV doesn’t underestimate the intelligence of children with autism.

The title of a new study asks “Does WISC-IV underestimate the intelligence of autistic children?” The authors’ answer is that it probably does. I believe that the reasoning behind this conclusion is faulty.

This study gives the unwarranted impression that it is a disservice to children with autism to use the WISC-IV. Let me be clear—I want to be helpful to children with autism. I certainly do not wish to do anything that hurts anyone. A naive reading of this article leads us to believe that there is an easy way to avoid causing harm (i.e., use the Raven’s Progressive Matrices test instead of the WISC-IV). In my opinion, acting on this advice does no favors to children with autism and may even result in harm.

Based on the evidence presented in the study, the average score difference between children with and without autism is smaller on Raven’s Progressive Matrices (RPM) and larger on the WISC-IV. The rhetoric of the introduction leaves the reader with the impression that the RPM is a better test of intelligence than the WISC-IV. Once we accept this, it is easy to discount the results of the WISC-IV and focus primarily on the RPM.

There is a seductive undercurrent to the argument: If you advocate for children with autism, don’t you want to show that they are more intelligent rather than less intelligent? Yes, of course! Doesn’t it seem harmful to give a test that will show that children with autism are less intelligent? It certainly seems so!

Such rhetoric reveals a fundamental misunderstanding of what individual intelligence tests like the WISC-IV are designed to do. In the vast majority of settings, they are not for certifying how intelligent a person is (whatever that means!). Their primary purpose is to help psychologists understand what a person can and cannot do. They are designed to help explain what is easy and what is difficult for a person so that appropriate interventions can be selected.

The WISC-IV provides a Full Scale IQ, which gives an overall summary of cognitive functions. However, it also gives more detailed information about various aspects of ability. Here is a graph I constructed from Figure 1 in the paper. In my graph, I converted percentiles to index scores and rearranged the order of the scores to facilitate interpretation.
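For readers who want to replicate the conversion, a percentile rank maps to an index score (mean 100, SD 15) through the standard normal quantile function. A minimal sketch in Python, assuming the scores are normally distributed:

```python
from statistics import NormalDist

def percentile_to_index(percentile, mean=100, sd=15):
    """Convert a percentile rank (0-100) to an index score,
    assuming normally distributed scores."""
    return mean + sd * NormalDist().inv_cdf(percentile / 100)

# The 50th percentile is, by definition, the population mean.
print(round(percentile_to_index(50)))     # 100
# The 84th percentile is about one standard deviation above it.
print(round(percentile_to_index(84.13)))  # 115
```

The same function, run in reverse, recovers percentiles from index scores, which is all the rearranged graph below required.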


Average Raven’s Progressive Matrices (RPM) and WISC-IV scores for children with and without autism

It is clear that the difference between the two groups of children is small for the RPM. The difference is also small for the WISC-IV Perceptual Reasoning Index (PRI). Why is this? The RPM and the PRI are both nonverbal measures of logical reasoning (AKA fluid intelligence). Both the WISC-IV and the RPM tell us that, on average, children with autism perform relatively well in this domain. The RPM is a great test, but it has no more to tell us. In contrast, the WISC-IV not only tells us what children with autism, on average, do relatively well, but also what they typically have difficulty with.

It is no surprise that the largest difference is in the Verbal Comprehension Index (VCI), a measure of verbal knowledge and language comprehension. Communication problems are a major component of the definition of autism. If children with autism had performed equally well on the VCI, we would wonder whether the VCI was really measuring what it was supposed to measure. Note that I am not saying that a low score on VCI is a requirement for the diagnosis of autism or that the VCI is the best measure of the kinds of language problems that are characteristic of autism. Rather, I am saying that children with autism, on average, have difficulties with language comprehension and that this difference is manifest to some degree in the WISC-IV scores.

The WISC-IV scores also suggest that, on average, children with autism not only have lower scores in verbal knowledge and comprehension but are also more likely to have other cognitive deficits, including deficits in verbal working memory (as measured by the WMI) and information processing speed (as measured by the PSI).

Thus, as a clinical instrument, the WISC-IV performs its purpose reasonably well. Compared to the RPM, it gives a more complete picture of the kinds of cognitive strengths and weaknesses that are common in children with autism.

If the researchers wish to demonstrate that the WISC-IV truly underestimates the intelligence of children with autism, they would need to show that it underpredicts important life outcomes in this population. For example, suppose we compare children with and without autism who score similarly low on the WISC-IV. If the WISC-IV underestimated the intelligence of children with autism, the children with autism would be expected to do better in school than the low-scoring children without autism. Obviously, a sophisticated analysis of this matter would involve a more complex research design, but in principle this is the kind of result that would be needed to show that the WISC-IV is a poor measure of cognitive abilities for children with autism.
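The comparison described above is, in essence, a test of predictive bias: regress the outcome on the test score plus a group indicator and check whether the group coefficient is positive (i.e., the test underpredicts for that group). The sketch below uses entirely synthetic data in which underprediction is deliberately built in; the sample size, effect sizes, and variable names are illustrative, not estimates from any real study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: a group indicator, test scores, and an outcome.
group = rng.integers(0, 2, n)        # 1 = hypothetical clinical group
score = rng.normal(100, 15, n)
# Build in underprediction for group 1: its members do better
# on the outcome than their test scores alone would predict.
outcome = 0.5 * score + 5 * group + rng.normal(0, 10, n)

# Ordinary least squares: outcome on intercept, score, and group.
X = np.column_stack([np.ones(n), score, group])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# A positive group coefficient means that, at any given test score,
# group-1 members have better expected outcomes: underprediction.
print(f"group coefficient: {beta[2]:.2f}")  # near the simulated value of 5
```

With real data, one would of course use proper standard errors and a far more careful design, but a reliably positive group coefficient is the signature the argument calls for.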


Intelligence and the Modern World of Work: A Special Issue of Human Resource Management Review

Charles Scherbaum and Harold Goldstein took an innovative approach to editing a special issue of Human Resource Management Review. They asked prominent I/O psychologists to collaborate with scholars from other disciplines to explore how advances in intelligence research might be incorporated into our understanding of the role of intelligence in the workplace.

It was an honor to be invited to participate, and it was a pleasure to be paired with Daniel Newman of the University of Illinois at Urbana-Champaign. Together we wrote an I/O psychology-friendly introduction to current psychometric theories of cognitive abilities, emphasizing Kevin McGrew’s CHC theory. Before that could be done, we had to articulate compelling reasons that I/O psychologists should care about assessing multiple cognitive abilities. This was a harder sell than I had anticipated.

Formal cognitive testing is not a part of most hiring decisions, though I imagine that employers typically have at least a vague sense of how bright job applicants are. When the hiring process does include formal cognitive testing, typically only general ability tests are used. Robust relationships between various aspects of job performance and general ability test scores have been established.

In comparison, the idea that multiple abilities should be measured and used in personnel selection decisions has not fared well in the marketplace of ideas. To explain this, there is no need to appeal to some conspiracy of test developers. I’m sure that they would love to develop and sell large, expensive, and complex test batteries to businesses. There is also no need to suppose that I/O psychology is peculiarly infected with a particularly virulent strain of g zealotry and that proponents of multiple ability theories have been unfairly excluded.

To the contrary, specific ability assessment has been given quite a bit of attention in the I/O psychology literature, mostly from researchers sympathetic to the idea of going beyond the assessment of general ability. Dozens (if not hundreds) of high-quality studies were conducted to test whether using specific ability measures added useful information beyond general ability measures. In general, specific ability measures provide only modest amounts of additional information beyond what can be had from general ability scores (ΔR² ≈ 0.02–0.06). In most cases, this incremental validity was not large enough to justify the added time, effort, and expense needed to measure multiple specific abilities. Thus it makes sense that relatively short measures of general ability have been preferred to longer, more complex measures of multiple abilities.
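The incremental validity question can be made concrete by comparing the R² of a model that predicts performance from general ability alone to a model that adds specific ability scores. A small Python simulation, with made-up effect sizes chosen to yield an increment in the range reported in the literature (everything here is illustrative, not an analysis of real selection data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Simulated standardized scores: g plus two g-correlated specific abilities.
g = rng.normal(size=n)
spec1 = 0.7 * g + 0.7 * rng.normal(size=n)
spec2 = 0.7 * g + 0.7 * rng.normal(size=n)
# Job performance driven mostly by g, with a small extra role for spec1.
perf = 0.5 * g + 0.25 * spec1 + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_g = r_squared(g, perf)
r2_full = r_squared(np.column_stack([g, spec1, spec2]), perf)
print(f"ΔR² = {r2_full - r2_g:.3f}")  # a small increment, as in the literature
```

Whether an increment of this size is worth acting on is exactly the cost-benefit judgment discussed below.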

However, there are several reasons that the omission of specific ability tests in hiring decisions should be reexamined:

  • Since those high-quality studies were conducted, multidimensional theories of intelligence have advanced, and we have a better sense of which specific abilities might be important for specific tasks (e.g., working memory capacity for air traffic controllers). The tests measuring these specific abilities have also improved considerably.
  • With computerized administration, scoring, and interpretation, the cost of assessment and interpretation of multiple abilities is potentially far lower than it was in the past. Organizations that make use of the admittedly modest incremental validity of specific ability assessments would likely gain a small but meaningful advantage over organizations that do not. Over the long run, small advantages often accumulate into large advantages.
  • Measurement of specific abilities opens up degrees of freedom in balancing the need to maintain the predictive validity of cognitive ability assessments and the need to reduce the adverse impact on applicants from disadvantaged minority groups that can occur when using such assessments. Thus, organizations can benefit from using cognitive ability assessments in hiring decisions without sacrificing the benefits of diversity.

The publishers of Human Resource Management Review have made our paper available to download for free until January 25th, 2015.

Broad Abilities in CHC Theory



John Willis’ Comments on Reports newsletter makes me happy.

Whenever I find that John Willis has posted a new edition of his Comments on Reports newsletter, I read it greedily and gleefully. Each newsletter is filled with sharp-witted observations, apt quotations, and practical wisdom about writing better psychological evaluation reports.

Recent gems:

From #251

The first caveat of writing reports is that readers will strive mightily to attach significant meaning to anything we write in the report. The second caveat is that readers will focus particularly on statements and numbers that are unimportant, potentially misleading, or — whenever possible — both. This is the voice of bitter experience.

Also from #251

Planning is so important that people are beginning to indulge in “preplanning,” which I suppose is better than “postplanning” after the fact. One activity we often do not plan is evaluations.

From #207:

I still recall one principal telling the entire team that, if he could not trust the spelling in my report, he could not trust any of the information in it. This happened recently (about 1975), so it is fresh in my mind. Names of tests are important to spell correctly. Alan and Nadeen Kaufman spell their last name with a single f and only one n. David Wechsler spelled his name as shown, never as Weschler. The American version of the Binet-Simon scale was developed at Stanford University, not Standford. I have to keep looking it up, but it is Differential Ability Scales even though it is a scale for several abilities. Richard Woodcock may, for all I know, have attended the concert, but his name is not Woodstock.


Three inspired presentations at the Richard Woodcock Institute Event

On 10/24/2014, I attended the Richard Woodcock Institute Event hosted at the University of Texas at Austin. The three speakers had strikingly different presentation styles but were equally excellent.

Dick Woodcock gave the opening remarks. I loved hearing about the twists and turns of his career and how he made the most of unplanned opportunities. It was rather remarkable how diverse his contributions are (including an electronic Braille typewriter). Then he stressed the importance of communicating test results in ways that non-specialists can understand. He speculated on what psychological testing will look like in the future, focusing on integrative software that will guide test selection and interpretation in more sophisticated ways than have hitherto been possible. Given that he has been creating the future of assessment for decades, I am betting that he is right. Later he graciously answered my questions about the WJ77 and how he came up with what I consider to be among the most ingenious test paradigms we have.

After a short break, Kevin McGrew gave us a wild ride of a talk about advances in CHC Theory. Actually it was more like a romp than a ride. I tried to keep track of all the interesting ideas for future research he presented, but there were so many I quickly lost count. The visuals were stunning and his energy was infectious. He offered a quick overview of new research from diverse fields about the overlooked importance of auditory processing (beyond the usual focus on phonemic awareness). Later he talked about his evolving conceptualization of the memory factors in CHC theory and the role of complexity in psychometric tests. My favorite part of the talk was a masterful presentation of information processing theory, judiciously supplemented with very clever animations.

After lunch, Cathy Fiorello gave one of the most thoughtful presentations I have ever heard. Instead of contrasting nomothetic and idiographic approaches to psychological assessment, Cathy stressed their integration. Most of the time, nomothetic interpretations are good first approximations and often are sufficient. However, certain test behaviors and other indicators signal that a more nuanced interpretation of the underlying processes of performance is warranted. Cathy asserted (and I agree) that well-trained and highly experienced practitioners can get very good at spotting unusual patterns of test performance that completely alter our interpretations of test scores. She called on her fellow scholars to develop and refine methods of assessing these patterns so that practitioners do not require many years of experience to develop their expertise. She was not merely balanced in her remarks—lip service to a sort of bland pluralism is an easy and rather lazy trick to seem wise. Instead, she offered fresh insight and nuance in her balanced and integrative approach to cognitive and neuropsychological assessment. That is, she did the hard work of offering clear guidelines for integrating nomothetic and idiographic methods, all the while frankly acknowledging the limits of what can be known.


Exploratory model of cognitive predictors of academic skills that I presented at APA 2014

I have many reservations about this model of cognitive predictors of academic abilities that I presented at APA today (along with co-presenters Lee Affrunti, Renée Tobin, and Kimberley Collins), but I think that it illustrates an important point: predicting and explaining cognitive and academic abilities is so complex that it cannot be done in one’s head. Eyeballing scores and making pronouncements is not likely to be accurate and will result in misinterpretations. We need good software that can manage the complex calculations for us. We can still think creatively in the diagnostic process, but the creativity must be grounded in realistic probabilities.

The images from the poster are from a single exploratory model based on a clinical sample of 865 college students. The model was so big and complex I had to split the path diagram into two images:

Exploratory Model of WAIS and WJ III cognitive subtests

Exploratory Model of WAIS and WJ III cognitive subtests. Gc = Comprehension/Knowledge, Ga = Auditory processing, Gv = Visual processing, Gl = Long-term memory: Learning, Gr = Long-term memory: Retrieval speed, Gs = Processing speed, MS = Memory span, Gwm = Working memory capacity, g = Anyone’s guess

Exploratory model of cognitive predictors of WJ III academic subtests

Exploratory model of cognitive predictors of WJ III academic subtests. Percentages in error terms represent unexplained variance.


Cognitive profiles are rarely flat.

Because cognitive abilities are positively correlated, there is an assumption that they should be evenly developed. When psychologists examine cognitive profiles, they often describe any features that deviate from the expected flat profile.

It is true, mathematically, that the expected profile IS flat. However, this does not mean that flat profiles are common. There is a very large set of possible profiles and only a tiny fraction are perfectly flat. Profiles that are nearly flat are not particularly common, either. Variability is the norm.

Sometimes it helps to get a sense of just how uneven cognitive profiles typically are. That is, it is good to fine-tune our intuitions about the typical profile with many exemplars. Otherwise it is easy to convince ourselves that the reason that we see so many interesting profiles is that we only assess people with interesting problems.

If we use the correlation matrix from the WAIS-IV to randomly simulate multivariate normal profiles, we can see that even in the general population, flat, “plain-vanilla” profiles are relatively rare. There are features that draw the eye in most profiles.
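This kind of simulation is straightforward to sketch in Python. The correlation matrix below is illustrative (four hypothetical subtests with moderate positive correlations), not the published WAIS-IV matrix:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative correlation matrix for four hypothetical subtests
# (these are NOT the published WAIS-IV values).
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.4],
    [0.4, 0.4, 0.4, 1.0],
])

# Simulate 10,000 profiles on the index-score metric (mean 100, SD 15).
mean, sd = 100, 15
z = rng.multivariate_normal(np.zeros(4), R, size=10_000)
profiles = mean + sd * z

# How often is a profile "nearly flat" (highest minus lowest
# score within 5 points)?
ranges = profiles.max(axis=1) - profiles.min(axis=1)
print(f"nearly flat: {np.mean(ranges <= 5):.1%}")
```

Even with correlations this strong, only a small minority of simulated profiles come anywhere close to flat, which is the point of the figure below.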

Simulated WAIS-IV profiles from the general population

If cognitive abilities were uncorrelated, profiles would be much more uneven than they are. But even with moderately strong positive correlations, there is still room for quite a bit of within-person variability.

Let’s see what happens when we look at profiles that have the exact same Full Scale IQ (80, in this case). The conditional distributions of the remaining scores are seen in the “violin” plots. There is still considerable diversity of profile shape even though the Full Scale IQ is held constant.

Simulated WAIS-IV profiles with a Full Scale IQ of 80

Note that the supplemental subtests have wider conditional distributions because they are not included in the Full Scale IQ, not necessarily because they are less g-loaded.
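For readers curious how the conditional distributions in the violin plots can be computed, the standard multivariate-normal conditioning formulas apply. The sketch below uses a simplified three-subtest battery with a uniform correlation of .5 and a composite defined as the subtest sum; these are illustrative values, not the actual WAIS-IV matrix or FSIQ weighting:

```python
import numpy as np

# Illustrative: three subtest index scores (mean 100, SD 15) with a
# uniform correlation of .5 (NOT the actual WAIS-IV values).
sd, r, k = 15.0, 0.5, 3
cov = np.full((k, k), r * sd**2)
np.fill_diagonal(cov, sd**2)
mu = np.full(k, 100.0)

# Treat the composite as the simple sum of the subtests.
w = np.ones(k)
comp_var = w @ cov @ w          # variance of the sum
cov_xc = cov @ w                # covariance of each subtest with the sum

# A composite index score of 80 is 4/3 SD below the mean; translate
# that into the corresponding value of the raw sum.
z_comp = (80 - 100) / 15
sum_obs = mu.sum() + z_comp * np.sqrt(comp_var)

# Multivariate-normal conditioning: mean and SD of each subtest
# given that the composite is fixed at this value.
cond_mean = mu + cov_xc / comp_var * (sum_obs - mu.sum())
cond_sd = np.sqrt(np.diag(cov) - cov_xc**2 / comp_var)
print(cond_mean.round(1), cond_sd.round(1))
```

In this toy example each subtest’s conditional mean drops to about 84, but its conditional standard deviation remains near 8.7 points: fixing the composite still leaves plenty of room for profile variability, which is exactly what the violin plots show.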
