CHC Theory, Cognitive Assessment

Intelligence and the Modern World of Work: A Special Issue of Human Resource Management Review

Charles Scherbaum and Harold Goldstein took an innovative approach to editing a special issue of Human Resource Management Review. They asked prominent I/O psychologists to collaborate with scholars from other disciplines to explore how advances in intelligence research might be incorporated into our understanding of the role of intelligence in the workplace.

It was an honor to be invited to participate, and it was a pleasure to be paired with Daniel Newman of the University of Illinois at Urbana-Champaign. Together we wrote an I/O psychology-friendly introduction to current psychometric theories of cognitive abilities, emphasizing Kevin McGrew’s CHC theory. Before that could be done, we had to articulate compelling reasons I/O psychologists should care about assessing multiple cognitive abilities. This was a harder sell than I had anticipated.

Formal cognitive testing is not a part of most hiring decisions, though I imagine that employers typically have at least a vague sense of how bright job applicants are. When the hiring process does include formal cognitive testing, typically only general ability tests are used. Robust relationships between various aspects of job performance and general ability test scores have been established.

In comparison, the idea that multiple abilities should be measured and used in personnel selection decisions has not fared well in the marketplace of ideas. To explain this, there is no need to appeal to some conspiracy of test developers. I’m sure that they would love to develop and sell large, expensive, and complex test batteries to businesses. There is also no need to suppose that I/O psychology is peculiarly infected with a particularly virulent strain of g zealotry and that proponents of multiple ability theories have been unfairly excluded.

To the contrary, specific ability assessment has been given quite a bit of attention in the I/O psychology literature, mostly from researchers sympathetic to the idea of going beyond the assessment of general ability. Dozens (if not hundreds) of high-quality studies were conducted to test whether specific ability measures added useful information beyond general ability measures. In general, specific ability measures provide only modest amounts of additional information beyond what can be had from general ability scores (ΔR² ≈ 0.02–0.06). In most cases, this incremental validity was not large enough to justify the added time, effort, and expense needed to measure multiple specific abilities. Thus it makes sense that relatively short measures of general ability have been preferred to longer, more complex measures of multiple abilities.
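
The incremental-validity question those studies asked can be sketched as a hierarchical regression. The R simulation below is purely hypothetical (all variable names and effect sizes are made up for illustration, not taken from any actual study): predict performance from general ability alone, then add the specific ability and inspect the change in R².

```r
# Hypothetical simulation (all effect sizes made up): does a specific
# ability add predictive value beyond general ability?
set.seed(1)
n <- 10000
g        <- rnorm(n)                                  # general ability
specific <- 0.5 * g + sqrt(0.75) * rnorm(n)           # specific ability, r = .5 with g
performance <- 0.50 * g + 0.15 * specific + rnorm(n)  # simulated job performance

r2_g     <- summary(lm(performance ~ g))$r.squared
r2_both  <- summary(lm(performance ~ g + specific))$r.squared
delta_r2 <- r2_both - r2_g   # incremental validity: positive but small
round(delta_r2, 3)
```

With effect sizes like these, the increment is real but small, which is the pattern the literature reports.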

However, there are several reasons that the omission of specific ability tests in hiring decisions should be reexamined:

  • Since the time that those high-quality studies were conducted, multidimensional theories of intelligence have advanced, and we have a better sense of which specific abilities might be important for specific tasks (e.g., working memory capacity for air traffic controllers). The tests measuring these specific abilities have also improved considerably.
  • With computerized administration, scoring, and interpretation, the cost of assessing and interpreting multiple abilities is potentially far lower than it was in the past. Organizations that make use of the admittedly modest incremental validity of specific ability assessments would likely have a small but real advantage over organizations that do not. Over the long run, small advantages often accumulate into large advantages.
  • Measurement of specific abilities opens up degrees of freedom in balancing the need to maintain the predictive validity of cognitive ability assessments and the need to reduce the adverse impact on applicants from disadvantaged minority groups that can occur when using such assessments. Thus, organizations can benefit from using cognitive ability assessments in hiring decisions without sacrificing the benefits of diversity.

The publishers of Human Resource Management Review have made our paper available to download for free until January 25th, 2015.

Broad Abilities in CHC Theory

CHC Theory

Fluid and Crystallized Intelligence in the Classroom and on the Job

Fluid intelligence is the ability to solve unfamiliar problems using logical reasoning. It requires the effortful control of attention to understand what the problem is and to work toward a logically sound answer. People with high fluid intelligence are able to figure out solutions to problems with very little instruction. Once they have found a good solution to a problem, they are able to see how it might apply to other similar problems. People with low fluid intelligence typically need hands-on, structured instruction to solve unfamiliar problems. Once they have mastered a certain skill or solution to a problem, they may have trouble seeing how it might apply in other situations. That is, their newfound knowledge does not generalize easily to other situations.

— Schneider & McGrew (2013, p. 772)

Gf in the Classroom and on the Job

Crystallized intelligence is acquired knowledge. When people solve important problems for the first time, they typically remember how they did it. The second time the problem is encountered, the solution is retrieved from memory rather than recreated anew using fluid intelligence. However, much of what constitutes crystallized intelligence is not the memory of solutions we personally have generated but the acquisition of the cumulative wisdom of those who have gone before us. That is, we are the intellectual heirs of all of the savants and geniuses throughout history. What they achieved with fluid intelligence adds to our crystallized intelligence. This is why even an average engineer can design machines that would have astounded Galileo, or even Newton. It is why ordinary high school students can use algebra to solve problems that baffled the great Greek mathematicians (who, for lack of a place-holding zero, could multiply large numbers only very clumsily).

Crystallized intelligence, broadly speaking, consists of one’s understanding of the richness and complexity of one’s native language and the general knowledge that members of one’s culture consider important. Of all the broad abilities, crystallized intelligence is by far the best single predictor of academic and occupational success. A person with a rich vocabulary can communicate more clearly and precisely than a person with an impoverished vocabulary. A person with a nuanced understanding of language can understand and communicate complex and subtle ideas better than a person with only a rudimentary grasp of language. Each bit of knowledge can be considered a tool for solving new problems. Each fact learned enriches the interconnected network of associations in a person’s memory. Even seemingly useless knowledge often has hidden virtues. For example, few adults know who Gaius and Tiberius Gracchus were (Don’t feel bad if you do not!). However, people who know the story of how they tried and failed to reform the Roman Republic are probably able to understand local and national politics far better than equally bright people who do not. It is not the case that ignorance of the Gracchi brothers dooms anyone to folly. It is the case that a well-articulated story from history can serve as a template for understanding similar events in the present.

— Schneider & McGrew (2013, pp. 772–773)

Gc in the Classroom and on the Job
Gf Gc Typology

The pictures are previously unpublished (and not to be taken too seriously).

Definitions from:

Schneider, W. J., & McGrew, K. S. (2013). Cognitive performance models: Individual differences in the ability to process information. In S. Ortiz & D. Flanagan (Sec. Eds.), Section 9: Assessment Theory, in B. J. Irby, G. Brown, R. Lara-Alecio, & S. Jackson (Vol. Eds.), Handbook of educational theories (pp. 767–782). Charlotte, NC: Information Age Publishing.

CHC Theory, Cognitive Assessment, R

Interactive 3D Multidimensional Scaling of the WJ III

I have been playing around with interactive 3D images (with the rgl package in R) and thought that it would be fun to present a multidimensional scaling (MDS) of the WJ III NU. Kevin McGrew has produced a number of beautiful images with MDS. My favorite is this one, not just because it is gorgeous, but because of the theoretical insights it communicates.

I simply took the correlation matrix from the WJ III NU standardization sample (ages 9 to 13) and subtracted each correlation from 1 to produce a distance measure. I performed classical MDS in R with the cmdscale function, allowing 3 dimensions. I colored each test according to my guess as to the CHC factor to which it belongs.
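
The steps above can be sketched in base R. This is a minimal sketch, not the actual analysis: the WJ III NU correlation matrix is not reproduced here, so six simulated stand-in tests are used, and the Cailliez correction (`add = TRUE`) is added so the 1 − r dissimilarities embed cleanly in three dimensions.

```r
# Minimal sketch: correlations -> distances -> classical MDS in 3 dimensions.
# The six "tests" below are simulated stand-ins, NOT the WJ III NU data.
set.seed(1)
n <- 200
g <- rnorm(n)
tests <- sapply(c(.7, .6, .6, .5, .4, .4), function(b) b * g + rnorm(n))
colnames(tests) <- paste0("Test", 1:6)

R_mat <- cor(tests)                               # correlation matrix
D     <- 1 - R_mat                                # subtract each correlation from 1
fit   <- cmdscale(as.dist(D), k = 3, add = TRUE)  # classical MDS, 3 dimensions
pts   <- fit$points                               # one row of 3D coordinates per test
# Interactive version: rgl::plot3d(pts, col = chc_colors)  # chc_colors assumed
```

Tests that correlate highly land near each other in the resulting 3D configuration, which is what makes the CHC clusters visible.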

If you click the static image below, you can play with it (Firefox and Chrome worked for me, but Internet Explorer and Safari did not):



R code used to generate this image

CHC Theory, Cognitive Assessment

Is g an ability?

Here is an excerpt from an early draft of the forthcoming chapter I wrote with Kevin McGrew. Almost all of this section was removed because the chapter was starting to look like it was going to be over 200 pages. Editing the chapter down to 100 pages was painful, and many parts we liked were removed:
Is g an ability?

The controversy about the theoretical status of g may have less fire and venom if some misunderstandings are cleared up. First, Spearman did not believe that performance on tests was affected by g and only g. In a review of a book by his rival Godfrey Thomson, Spearman (1940, p. 306) clarified his position.

“For I myself, no less than Thomson, accept the hypothesis that the observed test-scores, and therefore their correlations, derive originally from a great number of small causes; as genes, neurones, etc. Indeed this much seems to be accepted universally. We only disagree as to the way in which this derivation is to be explained.”

Second, Spearman (1927, p. 92) always maintained, even in his first paper about g (Spearman, 1904, p. 284), that g might consist of more than one general factor. Cattell (1943) noted that this was an anticipation of Gf-Gc Theory. Third, Spearman did not consider g to be an ability, or even a thing. Yes, you read that sentence correctly. Surprisingly, neither does Arthur Jensen, perhaps the most (in)famous living proponent of Spearman’s theory. Wait! The paper describing the discovery of g was called “‘General Intelligence’: Objectively Determined and Measured.” Surely this means that Spearman believed that g was general intelligence. Yes, but not really. Spearman thought it unproductive to equate g with intelligence, the latter being a complex amalgamation of many abilities (Jensen, 2000). Spearman believed that “intelligence” is a folk concept and thus no one can say anything scientific about it because everyone can define it whichever way they wish. Contemplating the contradictory definitions of intelligence moved Spearman (1927, p. 14) to erupt,

“Chaos itself can go no farther! The disagreement between different testers—indeed, even between the doctrine and the practice of the selfsame tester—has reached its apogee. […] In truth, ‘intelligence’ has become a mere vocal sound, a word with so many meanings that finally it has none.”

Spearman had a much more subtle conceptualization of g than many critics give him credit for. In discussing the difficulty of equating g with intelligence, or variations of that word with more precise meanings such as abstraction or adaptation, Spearman (1927, p.88) explained,

“Even the best of these renderings of intelligence, however, always presents one serious general difficulty. This is that such terms as adaptation, abstraction, and so forth denote entire mental operations; whereas our g, as we have seen, measures only a factor in any operation, not the whole of it.”

At a conference in which the proceedings were published in an edited volume (Bock, Goode, & Webb, 2000), Maynard Smith argued that there isn’t a thing called athletic ability but rather it is a performance category. That is, athletic ability would have various components such as heart volume, muscle size, etc. Smith went on to argue that g, like athletic ability, is simply a correlate that is statistically good at predicting performance. Jensen, in reply, said, “No one who has worked in this field has ever thought of g as an entity or thing. Spearman, who discovered g, actually said the very same thing that you’re saying now, and Cyril Burt and Hans Eysenck said that also: just about everyone who has worked in this field has not been confused on that point.” (Bock, Goode, & Webb, 2000, p. 29). In a later discussion at the same conference, Jensen clarified his point by saying that g is not a thing but is instead the total action of many things. He then listed a number of candidates that might explain why disparate regions and functions of the brain tend to function at a similar level within the same person such as the amount of myelination of axons, the efficiency of neural signaling, and the total number of neurons in the brain (Bock, Goode, & Webb, 2000, p. 52). Note that none of these hypotheses suggest that g is an ability. Rather, g is what makes abilities similar to each other within a particular person’s brain.

In Jensen’s remarks, all of the influences on g were parameters of brain functioning. We can extend Jensen’s reasoning to environmental influences with a thought experiment. Suspend disbelief for a moment and suppose that there is only one general influence on brain functioning: lead exposure. Because of individual differences in degree of lead exposure, all brain functions are positively correlated and thus a factor analysis would find a psychometric g-factor. Undoubtedly, it would be a smaller g-factor than is actually observed but it would exist.

In this thought experiment, g is not an ability. It is not lead exposure itself, but the effect of lead exposure. There is no g to be found in any person’s brain. Instead, g is a property of the group of people tested. Analogously, a statistical mean is not a property of individuals but a group property (Bartholomew, 2004). This hypothetical g emerges because lead exposure influences all of the brain at the same time and because some people are exposed to more lead than are others.
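
This thought experiment is easy to run as a simulation. In the hypothetical R sketch below, six abilities are generated with no shared influence except lead exposure, yet all of them end up positively correlated and a (small) general factor emerges from the correlation matrix:

```r
# Thought experiment in code: lead exposure is the ONLY shared influence.
set.seed(1)
n <- 10000
lead <- rnorm(n)                  # individual differences in lead exposure
# Six otherwise-independent abilities, each modestly harmed by lead
abilities <- sapply(1:6, function(i) -0.3 * lead + rnorm(n))
R <- cor(abilities)
mean_r <- mean(R[lower.tri(R)])   # small positive correlations among all abilities
g_size <- eigen(R)$values[1]      # first eigenvalue: a modest emergent g-factor
c(mean_r = mean_r, g_size = g_size)
```

The emergent factor is a property of the simulated group, not an ability located in any simulated person, which is exactly the point of the thought experiment.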

In the thought experiment above, the assumptions were unrealistically simple and restrictive. It is certain that individual differences in brain functioning are influenced in part by genetic differences among individuals and that some genetic differences affect almost all cognitive abilities (Exhibit A: Down syndrome). Some genetic differences affect some abilities more than others (e.g., Williams syndrome, caused by a deletion of about 26 genes on chromosome 7, is associated with impaired spatial processing but relatively intact verbal ability). Thus, there are general genetic influences on brain functioning and there are genetic differences that affect only a subset of brain functions.

The fact that there are some genetic differences with general effects on cognitive ability (and there are probably many) is enough to produce at least a small g-factor, and possibly a large one. However, there are many environmental effects that affect most aspects of cognitive functioning. Lead exposure is just one of many toxins that likely operate this way (e.g., mercury & arsenic). There are viruses and other pathogens that infect the brain more or less indiscriminately and thus have an effect on all cognitive abilities. Many head injuries are relatively focal (e.g., microstrokes and bullet wounds) but others are more global (e.g., large strokes and blunt force trauma) and thus increase the size of psychometric g. Poor nutrition probably hampers the functioning of individual neurons indiscriminately but the systems that govern the most vital brain functions have more robust mechanisms and greater redundancy so that temporary periods of extreme malnourishment affect some brain functions more than others. Even when you are a little hungry, the first abilities to suffer are highly g-loaded and evolutionarily new abilities such as working memory and controlled attention.

Societal forces probably also increase the size of psychometric g. Economic inequality ensures that some people will have more of everything that enhances cognitive abilities and more protection from everything that diminishes them. This means that influences on cognitive abilities that are not intrinsically connected (e.g., living in highly polluted environments, being exposed to water-borne parasites, poor medical care, poor schools, cultural practices that fail to encourage excellence in cognitively demanding domains, and reduced access to knowledgeable mentors, among many, many others) are correlated. Correlated influences on abilities cause otherwise independent cognitive abilities to be correlated, increasing the size of psychometric g. How much any of these factors increase the size of psychometric g (if at all) is not yet known. The point is that just because abilities are influenced by a common cause does not mean that the common cause is an ability.

There are two false dichotomies we should be careful to avoid. The first is the distinction between nature and nurture. There are many reasons that genetic and environmental effects on cognitive abilities might be correlated, including the possibility that genes affect the environment and the possibility that the environment alters the effect of genes. The second false choice is the notion that either psychometric g is an ability or it is not. Note that if we allow that some of psychometric g is determined by things that are not abilities, it does not mean that there are no truly general abilities (e.g., working memory, processing speed, fluid intelligence, and so forth). Both types of general influences on abilities can be present.

In this section, we have argued that not even the inventor of g considered it to be an ability. Why do so many scholars write as if Spearman believed otherwise? In truth, he (and Jensen as well) often wrote in a sort of mental shorthand as if g were an ability or a thing that a person could have more of or less of. Cattell (1943) gives this elegantly persuasive justification:

Obviously “g” is no more resident in the individual than the horsepower of a car is resident in the engine. It is a concept derived from the relations between the individual and his environment. But what trait that we normally project into and assign to the individual is not? The important further condition is that the factor is not determinable by the individual and his environment but only in relation to a group and its environment. A test factor loading or an individual’s factor endowment has meaning only in relation to a population and an environment. But it is difficult to see why there should be any objection to the concept of intelligence being given so abstract a habitation when economists, for example, are quite prepared to assign to such a simple, concrete notion as “price” an equally relational existence. (p. 19)

Cognitive Assessment, Psychometrics, Tutorial

Real Composite Scores vs. Averaged Pseudo-Composite Scores

Kevin McGrew and I wrote a position paper about the factors that influence the inaccuracy of averaged pseudo-composite scores compared to real composite scores. Averaged pseudo-composites are simple averages of test scores. For example, if a person scores 6 on WISC-IV Digit Span and 4 on WISC-IV Letter-Number Sequencing, an averaged pseudo-composite score measuring “Working Memory” would be 5, which translates to 75 in the index score metric. In truth, the real composite score should be lower than 75, with how much lower depending on the correlation between the two subtests.
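
Assuming a subtest correlation of .5 purely for illustration, the arithmetic of the Digit Span/Letter-Number Sequencing example can be sketched in R. A real composite standardizes the sum of z-scores by the standard deviation of that sum, which pulls extreme profiles further from the mean than simple averaging does:

```r
# Worked example (r = .5 is an illustrative assumption, not a WISC-IV value)
ss <- c(6, 4)            # scaled scores (mean 10, SD 3)
z  <- (ss - 10) / 3      # z-scores: -1.33 and -2.00
r  <- 0.5                # assumed correlation between the two subtests

# Averaged pseudo-composite: mean z, rescaled to index metric (mean 100, SD 15)
pseudo <- 100 + 15 * mean(z)          # = 75

# Real composite: sum of z-scores divided by the SD of that sum, sqrt(2 + 2r)
z_comp <- sum(z) / sqrt(2 + 2 * r)
real   <- 100 + 15 * z_comp           # about 71.1, lower than 75

round(c(pseudo = pseudo, real = real), 1)
```

The lower the correlation between the subtests, the larger the gap between the pseudo-composite and the real composite.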

The paper can be downloaded here.

Disclaimer: The paper is not yet peer-reviewed.

Cognitive Assessment, Compositator, My Software & Spreadsheets, Psychometrics

The Compositator: New Software for the Woodcock-Johnson III

I am very excited to announce that the Woodcock-Muñoz Foundation’s WMF Press has published my FREE software program with an admittedly silly name:

The Compositator!

Its purpose is anything but silly, though. The feature that gives it its name is that you can create custom composite scores from any combination of tests in the three WJ III batteries (Cognitive, Achievement, and Diagnostic Supplement).

For example, Picture Vocabulary and Academic Knowledge from the WJ III Achievement battery can be combined with Verbal Comprehension and General Information from the WJ III Cognitive battery to form a more reliable and more comprehensive measure of crystallized intelligence.

The Compositator is a supplement to the scoring software for the WJ III (either the WJ III Compuscore and Profiles Program or the WIIIP). It will not run on a machine unless one of these programs is installed.

The Compositator not only allows you to combine subtests in a psychometrically sound manner, it allows you to combine statistical information in ways not previously possible. For example, it allows you to create a comprehensive model of reading ability, specifying the relationships among all the various cognitive and academic abilities in the WJ III. From there, you can do things like estimate how much a person’s reading comprehension would improve if auditory processing were remediated. If auditory processing is remediated, what are the simultaneous effects on reading decoding, reading fluency, and reading comprehension?

There is much more that the program can do. I’ve been working on this program for the past 3 years and have been thinking about it since 1999 when I was in graduate school. There is a comprehensive manual and video tutorials to get you started.

Kevin McGrew has been the earliest and most enthusiastic supporter of the program. His generous description of the program is here.

I hope that you find the program useful. I would love to hear from you, especially if you have ideas for improving the program.