
IQ Tests: Is Knowledge of Useless Knowledge Useless?

Now I wish I could write you a melody so plain

That could hold you dear lady from going insane

That could ease you and cool you and cease the pain

Of your useless and pointless knowledge

– Bob Dylan “Tombstone Blues”

There is much pleasure to be gained from useless knowledge.

– Bertrand Russell

When critics look through the items in a general verbal information test, they sometimes sneer, with some justification, at the usefulness of the content. Is there any money in being able to list the names of the planets? Can I oppose injustice, armed with my knowledge of state capitals? Will any babies be saved because I know who Julian the Apostate was? Probably not.

Many (most?) facts I have learned are unlikely to ever be of practical use. If I knew which ones they were, I might happily surrender them to forgetfulness. However, because it is impossible to know what might be useful in the future, I will hang onto my useless and pointless knowledge for a little while longer, thank you very much.

When Francis Bacon wrote parenthetically that “knowledge itself is a power…” in the context of an argument attempting to discredit the theological beliefs of certain religious sects, he probably did not mean the phrase in the sense that it is invoked today (i.e., that knowledge confers power). However, the phrase “knowledge is power” has survived because it resonates with our experience and pithily expresses something that is increasingly true in an age that rewards those who can profit from information.

Good items in a test of General Information should not be about random facts. Easy items should not be merely easy (e.g., “What is the color of the sky?”). Rather, they should test for knowledge of information considered essential for living independently in our culture. A person who does not understand why dishes should be washed is not ready to live unsupervised. More difficult items should not be merely difficult (e.g., “What is the largest city in Svalbard? How many teeth does an orca whale have?”). Rather, they should measure knowledge relevant to core aspects of our culture (e.g., “Why do banks loan people money? Why do people still learn Latin and ancient Greek? Who was Isaac Newton? What is the purpose of the United Nations?”).

Just as language development consists of many narrow abilities, General Information has many sub-categories. Typically these sub-categories consist of academic domains such as knowledge of the humanities and knowledge of the sciences. These categories have further subdivisions (e.g., physics, chemistry, biology, and so forth – and each of these, in turn, has further subdivisions).

General Information consists of knowledge that each person in a culture is expected to be familiar with (or would be admired for knowing). However, much (if not most) of a person’s knowledge is not of this sort. For example, although it is expected that everyone in this culture should know what airplanes are, only pilots are expected to know how to fly them. In CHC Theory, knowledge that is expected to be known only to members of a particular profession or enthusiasts of a particular hobby, sport, or other activity is classified as Domain-Specific Knowledge (Gkn). Most subject-specific academic achievement tests (e.g., European History, Geology, Contemporary American Literature) would be considered measures of Gkn, not Gc. That is, typically (but not always) achievement measures are the relevant outcomes we wish to explain, not explanatory aptitudes. In contrast, measures of General Information (e.g., WISC-IV Information) are intended to be estimates of the body of knowledge from which a person can draw to solve a wide range of problems.

Like Lexical Knowledge, General Information has a bi-directional relationship with reading comprehension. Very little of what is written is fully self-contained; authors presume that readers have considerable background knowledge and often do not bother to explain themselves. Drout (2006) describes how difficult and amusing it is to explain to non-native speakers of English what newspaper headlines such as “Under Fire from G.O.P., White House Hunkers Down” mean.[1] Children who know more understand more of what they read. Understanding more makes reading more enjoyable. Reading more exposes children to more knowledge, much of which is inaccessible via oral culture.

[1] Why would anyone be under a fire?

It means being shot at.

People are shooting at the White House?

No, it is just a vivid way of saying that the G.O.P. is criticizing the administration, which is symbolized by the White House, where the president lives.

Who is the G.O.P.?

It stands for the Grand Old Party.

If they are old, why haven’t I heard of them?

They are the Republicans.

Oh! Is the Democratic Party the new party?

No, they have been around longer than the Republicans. The nickname GOP was first used when the party was only a few decades old.

I don’t understand.

I don’t either, really. I just know that “old” is an affectionate way of describing something you have liked for a long time.

Could I say that I enjoy old ice cream?

No, that doesn’t sound right. You should probably just avoid using “old” that way.

What does “hunker” mean?

I really have no idea. I just know that when you are under fire, you should hunker down.

If you don’t know what it means, how do you know what to do?

True! I just looked it up. It means “to squat, to sit on your haunches.” I guess it all makes sense now.

So, when the Republicans criticize the president, he sits on his haunches?

That is an amusing image, but no. It means that you stick to your guns…er…I mean…

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York, NY: Oxford University Press.





Why do IQ tests measure vocabulary?

If Lexical Knowledge (understanding of words and their uses) is simply memorizing the definitions of fancy words, then, at best, it is a trivial ability valued by academics, pedants, and fuddy-duddies. At worst, its elevation by elitists is a tool of oppression. There is some truth to these views of Lexical Knowledge, but they are myopic. I will argue that vocabulary tests are rightfully at the center of most assessments of language and crystallized intelligence. Some words have the power to open up new vistas of human experience. For example, when I was thirteen, learning the word “ambivalence” clarified many aspects of interpersonal relationships that were previously baffling.

A word is an abstraction. The need for labels of simple categories is perfectly clear. Knowing the word anger (or its equivalent in any other language) frees us from having to treat each encounter with the emotion as a unique experience. Being able to communicate with others about this abstract category of experience facilitates self-awareness and the understanding of interpersonal relations. We can build up a knowledge base of the sorts of things that typically make people angry and the kinds of reactions to expect from angry people.

It is less obvious why anger has so many synonyms and near-synonyms, some of which are a bit obscure (e.g., iracund, furibund, and zowerswopped!). Would it not be easier to communicate if there were just one word for every concept? It is worthwhile to consider the question of why words are invented. At some point in the history of a language, a person thought that it would be important to distinguish one category of experience from others and that this distinction merited a single word. Although most neologisms are outlived even by their inventors, a few of them are so useful that they catch on and are used by enough people for enough time that they are considered “official words” and are then taken for granted as if they had always existed.[1] That is, people do not adopt new words with the primary goal of impressing one another. They do it because the word succinctly captures an idea or a distinction that would otherwise be difficult or tiresome to describe indirectly. Rather than saying, “Because Shelly became suddenly angry, her sympathetic nervous system directed her blood away from her extremities toward her large muscles. One highly visible consequence of this redirection of blood flow was that her face turned white for a moment and then became discolored with splotches of red,” it is simply more economical to say that “Shelly was livid with rage.” By convention, the use of the word livid signals that Shelly is probably not thinking too clearly at the moment and that the next thing that Shelly says or does is probably going to be impulsive and possibly hurtful.

Using near synonyms interchangeably is not merely offensive to word nerds and the grammar police. It reflects, and possibly leads to, an impoverishment of thought and a less nuanced understanding of the world. For example, jealousy is often used as a substitute for envy. They are clearly related words, but they are not at all the same. In fact, in a sense, they tend to be experienced by people on opposite sides of a conflicted relationship. Envy is the painful, angry awareness that someone else enjoys some (probably undeserved) advantage that we covet. Jealousy is the angry, often vigilant, suspicion that we may lose our beloved to a rival. A person unaware of this distinction would find it difficult to benefit from or even make sense of the wisdom of La Rochefoucauld’s observation that “Jealousy is born with love, but does not die with it.”

Lexical Knowledge is obviously important for reading decoding. If you are familiar with a word, it is easier to decode. It is also obviously important for reading comprehension. If you know what a word means, it is easier to comprehend the sentences in which it appears. It is probably the case that reading comprehension also influences Lexical Knowledge. Children who comprehend what they read are more likely to enjoy reading and thus read more. Children who read more expose themselves to words that rarely occur in casual speech but whose meanings can be inferred from how they are used in the text. Finally, Lexical Knowledge is important for writing. Children with a rich understanding of the distinctions between words will not only be able to express what they mean more precisely, but their knowledge of certain words will enable them to express thoughts that they might not otherwise have had. For example, it seems to me unlikely that a student unfamiliar with the word “paradox” would be able to write an essay about two ideas that appear to be contradictory at first glance but at a deeper level are consistent with each other.

[1] Of course, dictionaries abound with antique words that were useful for a time but now languish in obscurity. For example, in our more egalitarian age, calling someone a cur (an inferior dog because it is of mixed breed) is not the insult that it once was. It is now used mostly for comedic effect when someone affects an aristocratic air. My favorite example of a possibly soon-to-be antique word is decadent, which is nowadays almost exclusively associated with chocolate.



Fluid Intelligence, Defined

Mentioning fluid intelligence at cocktail parties as if it were a perfectly ordinary topic of conversation carries with it a certain kind of cachet that is hard to describe unless you have experienced it for yourself. Part of Gf’s mystique can be attributed to Cattell’s (1987) assertions that Gf is linked to rather grand concepts such as innate ability, genetic potential, biological intelligence, mass action, and the overall integrity of the whole brain.[1] Heady stuff indeed!

At the measurement level, Gf tests require reasoning with abstract symbols such as figures and numbers.[2] Good measures of Gf are novel problems that require mental effort and controlled attention to solve. If a child can solve the problem without much thought, the child is probably making use of prior experience. Thus, even though a test is considered a measure of fluid intelligence, it does not measure fluid intelligence to the same degree for all children. Some children have been exposed to matrix tasks and number series in school or in games. Fluid intelligence is about novel problem solving and, as Kaufman (1994, p. 31) noted, wryly pointing out the obvious, a test is only novel once. The second time a child takes the same fluid intelligence test, performance typically improves (by about 5 points, or one-third of a standard deviation; Kaufman & Lichtenberger, 2006). This is why reports that fluid intelligence can be improved with training (Jaeggi, Buschkuehl, Jonides, & Perrig, 2008) cannot be taken at face value.[3] Improved performance on “Gf tests” after training does not necessarily mean that Gf is the ability that has improved.

At the core of Gf is the narrow ability of Induction. Inductive reasoning is the ability to figure out an abstract rule from a limited set of data. In a sense, inductive reasoning represents a person’s capacity to acquire new knowledge without explicit instruction. Inductive reasoning allows a person to profit from experience. That is, information and experiences are abstracted so that they can be generalized to similar situations. Deductive reasoning is the ability to apply a rule in a logically valid manner to generate a novel solution. In CHC Theory, deductive reasoning is called General Sequential Reasoning. Although logicians have exquisitely nuanced vocabularies for talking about the various sub-categories of inductive and deductive reasoning, it will suffice to say that everyday problem solving typically requires a complex mix of the two.

Inductive and deductive reasoning can be found in multiple places in CHC Theory. Whenever inductive and deductive reasoning are applied to quantitative content, the result is called Quantitative Reasoning. For mysterious reasons, inductive and deductive reasoning with quantitative stimuli tend to cluster together in factor analyses. Inductive and deductive reasoning also make an appearance in Gc. Whenever inductive and deductive reasoning tasks rely primarily on past experience and previous knowledge, they are classified as measures of crystallized intelligence. Many researchers have supposed that the Similarities subtest on Wechsler tests contains an element of fluid reasoning because inductive reasoning is used to figure out how two things or concepts are alike. If the question is something like, “How are a dog and a cat alike?” then it is very unlikely that a child arrives at the correct answer by reasoning things out for the first time. Instead, the child makes an association immediately based on prior knowledge.

Researchers are not satisfied with accepting Gf as a given. They wish to know the origins of Gf and to understand why some people are so much more adept at abstract reasoning than other people are (Conway, Cowan, Bunting, Therriault, & Minkoff, 2002). One hypothesis that is still being explored is that fluid reasoning has a special relationship with working memory. Working memory is the ability to hold information in mind while using controlled attention to transform it in some way (e.g., rearranging the order of things or applying a computational algorithm). Many researchers have noted that tests of fluid reasoning, particularly matrix tasks (e.g., WISC-IV Matrix Reasoning), can be made more difficult by increasing the working memory load required to solve the problem. Kyllonen and Christal (1990) published the provocative finding that individual differences in Gf could be explained entirely by individual differences in working memory. Many studies have attempted to replicate this finding but have failed. Most studies find that Gf and working memory are strongly correlated (about 0.6) but are far from identical (Kane, Hambrick, Tuholski, Wilhelm, Payne, & Engle, 2004).
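To put a rough number on “strongly correlated but far from identical”: squaring a correlation gives the proportion of variance the two measures share. A minimal sketch using the 0.6 figure cited above (the arithmetic is illustrative, not a latent-variable analysis):

```python
# With r of about 0.6 between Gf and working memory, the shared variance
# is r**2, about a third of the total -- strongly related constructs
# that nonetheless leave roughly two-thirds of their variance unshared.
r = 0.6
shared_variance = round(r ** 2, 2)
print(shared_variance)                 # 0.36
print(round(1 - shared_variance, 2))   # 0.64
```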

Just as we have distinguished between statistical g and theoretical g, it is important to note that there is a difference between the Gf that is measured by Gf tests and the Gf that is written about by theorists. Some of Cattell’s hypotheses about Gf have stood the test of time, whereas others have not held up very well. For example, contrary to Cattell’s prediction, the heritability of Gf is not higher than that of Gc. I mention this because it is probably not justified to claim that because a child scores well on Gf tests, the child has high innate talent or that the child’s biological intelligence is high.

Most of the effects of Gf on academic achievement are mediated by Gc (i.e., better reasoning leads to more knowledge which leads to higher achievement). However, Gf seems to have a special relationship with complex problem solving in mathematics. Because Gf tests measure abstract reasoning, it is unsurprising that they would predict performance in an abstract domain such as mathematics (Floyd, Evans, & McGrew, 2003).

[1] Horn (1985) tended to de-emphasize the biological/genetic interpretation of fluid intelligence.

[2] Test developers have tried to create Gf measures with verbal content (e.g., WJ-R Verbal Analogies or SB5 Verbal Fluid Reasoning) but find that verbal Gf tests do not always load on the same factor as traditional Gf tests (Canivez, 2008; Woodcock, 1990). It is possible that the KAIT Logical Steps subtest may be the only commercially available verbal Gf test that does not have substantial loadings on Gc (Flanagan & McGrew, 1998; Immekus & Miller, 2010), possibly because it does not use the verbal analogy format.

[3] See Moody (2009) for a discussion of other methodological problems that may have compromised the validity of the Jaeggi et al (2008) study.



CHC Theory: Love, Marriage, and a Giant Baby Carriage

CHC Theory is the child of two titans: Carroll’s (1993) lumbering leviathan, the Three-Stratum Theory of Cognitive Abilities, and Cattell and Horn’s two-headed giant, Gf-Gc Theory (Horn & Cattell, 1964). Given that Horn was as staunchly anti-g as they come (Horn & Blankson, 2005) and that Carroll was a dedicated g-man (though not of the g-and-only-g variety; Carroll, 2003), it is surprising that these theories even had a courtship, much less a marriage.

From 1986 to the late 1990s, in a series of encounters initiated and chaperoned by test developer Richard Woodcock, Horn and Carroll discussed the intersections of their theories and eventually consented to have their names yoked together under a single framework (McGrew, 2005). Although the interfaith ceremony was officiated by Woodcock, the product of their union was midwifed primarily by McGrew (1997). Woodcock, McGrew, and colleagues’ ecumenical approach has created a space in which mono-g-ists and poly-G-ists can engage in civil dialogue or at least ignore one another politely. CHC Theory puts g atop a three-stratum hierarchy of cognitive abilities, but g’s role in the theory is such that poly-G-ists can ignore it to the degree that they see fit.



Cattell-Horn-Carroll (CHC) Theory of Cognitive Abilities in 3D?

While preparing for a talk a few months ago, I made this slide showing conceptual groupings of CHC Theory broad abilities. I spent a lot of time tweaking it.


Then, after working late into the night, my thoughts started becoming looser and at the time I thought that this picture was a good idea:


Don’t ask me what all the 3D effects and the projected shadows might mean. I’m not really sure what I was thinking…maybe that the speed components of ability are distinct from, and yet closely integrated with, their associated power abilities?

At any rate, it was fun to make…


Is g an ability?

Here is an excerpt from an early draft of the forthcoming chapter I wrote with Kevin McGrew. Almost all of this section was removed because the chapter was starting to look like it was going to run over 200 pages. Editing the chapter down to 100 pages was painful, and many parts we liked were removed:
Is g an ability?

The controversy about the theoretical status of g may have less fire and venom if some misunderstandings are cleared up. First, Spearman did not believe that performance on tests was affected by g and only g. In a review of a book by his rival Godfrey Thomson, Spearman (1940, p. 306) clarified his position.

“For I myself, no less than Thomson, accept the hypothesis that the observed test-scores, and therefore their correlations, derive originally from a great number of small causes; as genes, neurones, etc. Indeed this much seems to be accepted universally. We only disagree as to the way in which this derivation is to be explained.”

Second, Spearman (1927, p. 92) always maintained, even in his first paper about g (Spearman, 1904, p. 284), that g might consist of more than one general factor. Cattell (1943) noted that this was an anticipation of Gf-Gc Theory. Third, Spearman did not consider g to be an ability, or even a thing. Yes, you read that sentence correctly. Surprisingly, neither does Arthur Jensen, perhaps the most (in)famous living proponent of Spearman’s theory. Wait! The paper describing the discovery of g was called “‘General Intelligence’: Objectively Determined and Measured.” Surely this means that Spearman believed that g was general intelligence. Yes, but not really. Spearman thought it unproductive to equate g with intelligence, the latter being a complex amalgamation of many abilities (Jensen, 2000). Spearman believed that “intelligence” is a folk concept and thus no one can say anything scientific about it because everyone can define it whichever way they wish. Contemplating the contradictory definitions of intelligence moved Spearman (1927, p. 14) to erupt,

“Chaos itself can go no farther! The disagreement between different testers—indeed, even between the doctrine and the practice of the selfsame tester—has reached its apogee. […] In truth, ‘intelligence’ has become a mere vocal sound, a word with so many meanings that finally it has none.”

Spearman had a much more subtle conceptualization of g than many critics give him credit for. In discussing the difficulty of equating g with intelligence, or variations of that word with more precise meanings such as abstraction or adaptation, Spearman (1927, p.88) explained,

“Even the best of these renderings of intelligence, however, always presents one serious general difficulty. This is that such terms as adaptation, abstraction, and so forth denote entire mental operations; whereas our g, as we have seen, measures only a factor in any operation, not the whole of it.”

At a conference whose proceedings were published in an edited volume (Bock, Goode, & Webb, 2000), Maynard Smith argued that there is no thing called athletic ability; rather, it is a performance category. That is, athletic ability would have various components such as heart volume, muscle size, and so forth. Smith went on to argue that g, like athletic ability, is simply a correlate that is statistically good at predicting performance. Jensen, in reply, said, “No one who has worked in this field has ever thought of g as an entity or thing. Spearman, who discovered g, actually said the very same thing that you’re saying now, and Cyril Burt and Hans Eysenck said that also: just about everyone who has worked in this field has not been confused on that point” (Bock, Goode, & Webb, 2000, p. 29). In a later discussion at the same conference, Jensen clarified his point by saying that g is not a thing but is instead the total action of many things. He then listed a number of candidates that might explain why disparate regions and functions of the brain tend to function at a similar level within the same person, such as the amount of myelination of axons, the efficiency of neural signaling, and the total number of neurons in the brain (Bock, Goode, & Webb, 2000, p. 52). Note that none of these hypotheses suggest that g is an ability. Rather, g is what makes abilities similar to each other within a particular person’s brain.

In Jensen’s remarks, all of the influences on g were parameters of brain functioning. We can extend Jensen’s reasoning to environmental influences with a thought experiment. Suspend disbelief for a moment and suppose that there is only one general influence on brain functioning: lead exposure. Because of individual differences in degree of lead exposure, all brain functions are positively correlated and thus a factor analysis would find a psychometric g-factor. Undoubtedly, it would be a smaller g-factor than is actually observed but it would exist.

In this thought experiment, g is not an ability. It is not lead exposure itself, but the effect of lead exposure. There is no g to be found in any person’s brain. Instead, g is a property of the group of people tested. Analogously, a statistical mean is not a property of individuals but a group property (Bartholomew, 2004). This hypothetical g emerges because lead exposure influences all of the brain at the same time and because some people are exposed to more lead than are others.
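The thought experiment can be sketched numerically. In the following minimal simulation (all names and parameter values are illustrative assumptions, not empirical estimates), a single shared influence, labeled “lead” purely for the sake of the example, is added to otherwise independent test scores, and a positive manifold with a dominant first factor appears in the correlation matrix:

```python
# Minimal sketch of the thought experiment: one shared influence acting
# on otherwise independent abilities is enough to produce a positive
# manifold and a g-like first factor. All values are assumptions chosen
# for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5000, 6

# Each person's standardized level of the single shared influence.
lead = rng.standard_normal(n_people)

# Test score = a modest negative effect of the shared influence plus a
# large independent, test-specific component (unit total variance).
loading = 0.4
scores = (-loading * lead[:, None]
          + np.sqrt(1 - loading**2) * rng.standard_normal((n_people, n_tests)))

r = np.corrcoef(scores, rowvar=False)
off_diag = r[~np.eye(n_tests, dtype=bool)]

# Every between-test correlation is positive (near loading**2 = 0.16),
# so the largest eigenvalue of the correlation matrix exceeds 1: a
# "g-factor" emerges from a common cause that is not itself an ability.
print(off_diag.min() > 0)           # positive manifold
print(np.linalg.eigvalsh(r).max())  # roughly 1 + (n_tests - 1) * 0.16
```

Setting `loading = 0` makes the off-diagonal correlations hover around zero and the first eigenvalue stay near 1; the hypothetical g disappears along with the shared influence.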

In the thought experiment above, the assumptions were unrealistically simple and restrictive. It is certain that individual differences in brain functioning are influenced in part by genetic differences among individuals and that some genetic differences affect almost all cognitive abilities (Exhibit A: Down syndrome). Some genetic differences affect some abilities more than others (e.g., Williams syndrome, caused by a deletion of about 26 genes on chromosome 7, is associated with impaired spatial processing but relatively intact verbal ability). Thus, there are general genetic influences on brain functioning, and there are genetic differences that affect only a subset of brain functions.

The fact that there are some genetic differences with general effects on cognitive ability (and there are probably many) is enough to produce at least a small g-factor, and possibly a large one. However, there are many environmental effects that affect most aspects of cognitive functioning. Lead exposure is just one of many toxins that likely operate this way (e.g., mercury and arsenic). There are viruses and other pathogens that infect the brain more or less indiscriminately and thus have an effect on all cognitive abilities. Many head injuries are relatively focal (e.g., microstrokes and bullet wounds), but others are more global (e.g., large strokes and blunt force trauma) and thus increase the size of psychometric g. Poor nutrition probably hampers the functioning of individual neurons indiscriminately, but the systems that govern the most vital brain functions have more robust mechanisms and greater redundancy, so temporary periods of extreme malnourishment affect some brain functions more than others. Even when you are a little hungry, the first abilities to suffer are highly g-loaded and evolutionarily new abilities such as working memory and controlled attention.

Societal forces probably also increase the size of psychometric g. Economic inequality ensures that some people will have more of everything that enhances cognitive abilities and more protection from everything that diminishes them. This means that influences on cognitive abilities that are not intrinsically connected (e.g., living in highly polluted environments, being exposed to water-borne parasites, poor medical care, poor schools, cultural practices that fail to encourage excellence in cognitively demanding domains, and reduced access to knowledgeable mentors, among many, many others) are correlated. Correlated influences on abilities cause otherwise independent cognitive abilities to be correlated, increasing the size of psychometric g. How much any of these factors increase the size of psychometric g (if at all) is not yet known. The point is that just because abilities are influenced by a common cause does not mean that the common cause is an ability.

There are two false dichotomies we should be careful to avoid. The first is the distinction between nature and nurture. There are many reasons that genetic and environmental effects on cognitive abilities might be correlated, including the possibility that genes affect the environment and the possibility that the environment alters the effect of genes. The second false choice is the notion that either psychometric g is an ability or it is not. Note that if we allow that some of psychometric g is determined by things that are not abilities, it does not mean that there are no truly general abilities (e.g., working memory, processing speed, fluid intelligence, and so forth). Both types of general influences on abilities can be present.

In this section, we have argued that not even the inventor of g considered it to be an ability. Why do so many scholars write as if Spearman believed otherwise? In truth, he (and Jensen as well) often wrote in a sort of mental shorthand as if g were an ability or a thing that a person could have more of or less of. Cattell (1943) gives this elegantly persuasive justification:

Obviously “g” is no more resident in the individual than the horsepower of a car is resident in the engine. It is a concept derived from the relations between the individual and his environment. But what trait that we normally project into and assign to the individual is not? The important further condition is that the factor is not determinable by the individual and his environment but only in relation to a group and its environment. A test factor loading or an individual’s factor endowment has meaning only in relation to a population and an environment. But it is difficult to see why there should be any objection to the concept of intelligence being given so abstract a habitation when economists, for example, are quite prepared to assign to such a simple, concrete notion as “price” an equally relational existence. (p. 19)


CHC Theory 2.0: Broad and Narrow Ability Definitions

I am proud to report the release of an updated set of definitions of broad and narrow abilities included in our revision of the Cattell-Horn-Carroll Theory of Cognitive Abilities. Kevin McGrew and I worked hard to update the ability definitions we present in a forthcoming chapter in the third edition of Flanagan and Harrison’s Contemporary Intellectual Assessment.