Cognitive Assessment, Principles of assessment of aptitude and achievement

Dr. Procrustes does not need to see you; he has your test scores.

I rock at the Tower of Hanoi—you could give me a stack of as many discs as you like and I can move the whole stack from one peg to the other without any hesitation and without a single error. I don’t mean to be immodest about it, but it’s true. My performance is like 11.8 standard deviations above the mean, which by my calculations is so rare that if a million people were born every second ever since the Big Bang, there is still only a 2.7% chance that I would have been born by now—I feel very lucky (and honored) to be here.

You would be forgiven for thinking that I had excellent planning ability…but not if you voiced such an opinion out loud, within earshot of my wife, causing her to die of laughter—I would miss her very much. No, it is not by preternatural planning ability that I compete with only the gods in Tower of Hanoi tournaments-in-the-sky. In fact, the first time I tried it, my score was not particularly good. I am not going to say what it was, but the manual said that I ranked somewhere between the average Darwin Award winner and the person who invented English spelling rules. After giving the test some thought, however, I realized that each movement of the discs is mechanically determined by a simple rule. I will not say what the rule is for fear of compromising the validity of the test for more people. The rule is not so simple that you would figure it out while taking the test for the first time, but it is simple enough that once you learn it, you will be surprised how easy the test becomes.

All kidding aside, it is important for the clinician to be mindful of the process by which a child performs well or poorly on a test. For me, the Tower of Hanoi does not measure planning. For others, it might. Edith Kaplan (1988) was extremely creative in her methods of investigating how people performed on cognitive tests. Kaplan-inspired tools such as the WISC-IV Integrated provide more formal methods of assessing strategy use. However, careful observations and even simply asking children how they approached a task (after the tests have been administered according to standard procedures) are often enlightening and can save time during the follow-up testing phase. For example, I once read about an otherwise low-performing boy who scored very well on the WISC-IV Block Design subtest. When asked how he did so well on it, he said that he had the test at home and that he practiced it often. The clinician doubted this very much, but his story turned out to be true! His mother was an employee at a university and saw someone from the Psychology Department throwing outdated WISC-III test kits into the garbage. She was intrigued and took one home for her children to play with.

I once gave the WAIS-III to a woman who responded to the WAIS-III Vocabulary subtest as if it were a free association test. I tried to use standard procedures to encourage her to give definitions to words but the standard prompts (“Tell me more”) just made it worse. Finally, I broke with protocol and said, “These are fabulous answers and I like your creativity. However, I think I did not explain myself very well. If you were to look up this word in the dictionary, what might it say about what the word means?” In the report I noted the break with protocol but I believe that the score she earned was much more reflective of her Lexical Knowledge than would have been the case had I followed procedures more strictly. I do not wish to be misunderstood, however; I never deviate from standard procedures except when I must. Even then, I conduct additional follow-up testing to make sure that the scores are correct.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.


Explaining the difference between vision and visual-spatial processing to parents.

Vision is the ability to see something and visual-spatial processing helps you make sense of what you see.

Vision is the ability to see what is there. Visual-spatial processing is the ability to see what is not there, too, in a sense. With good vision you can see what something looks like; with good visual-spatial processing you can imagine what something would look like if you turned it around or if you were standing somewhere else or if something else was covering part of it.

With good vision you can see objects; with good visual-spatial processing you can see how they might fit together.

With good vision you can see a bunch of lines and splotches of colors; with good visual-spatial processing you can see how those lines and splotches of color form meaningful patterns.

This is the ability that sculptors, painters, designers, engineers, and architects need. It comes in pretty handy for the rest of us too.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.


Aptitudes and Achievement: Definitions, Distinctions, and Difficulties

Achievement typically refers to knowledge and skills that are formally taught in academic settings. However, this definition of achievement can be broadened to include any ability that is valued and taught in a particular cultural setting (e.g., hunting, dancing, or computer programming). Aptitude refers to an individual’s characteristics that indicate the potential to develop a culturally valued ability, given the right circumstances. The difference between aptitudes and achievement at the definitional level is reasonably clear. However, at the measurement level, the distinction becomes rather murky.

Potential, which is latent within a person, is impossible to observe directly. It must be inferred by measuring characteristics that either are typically associated with an ability or are predictive of the future development of the ability. Most of the time, aptitude is assessed by measuring abilities that are considered to be necessary precursors of achievement. For example, children who understand speech have greater aptitude for reading comprehension than do children who do not understand speech. Such precursors may themselves be a form of achievement. For example, it is possible for researchers to consider students’ knowledge of history as an outcome variable that is intrinsically valuable. However, some researchers may measure knowledge of history as a predictor of being able to construct a well-reasoned essay on politics. Thus, aptitude and achievement tests are not distinguished by their content but by how they are used. If we use a test to measure current mastery of a culturally valued ability, it is an achievement test. If we use a test to explain or forecast mastery of a culturally valued ability, it is an aptitude test.

IQ tests are primarily used as aptitude tests. However, an inspection of the contents of most IQ tests reveals that many test items could be repurposed as items in an achievement test (e.g., vocabulary, general knowledge, and mental arithmetic items). Sometimes the normal roles of reading tests and IQ tests are reversed, such as when neuropsychologists estimate loss of function following a brain injury by comparing current IQ to performance on a word-reading test.

A simple method to distinguish between aptitude and achievement is to ask, “Do I care about whether a child has the ability measured by this test because it is inherently valuable or because it is associated with some other ability (the one that I actually care about)?” Most people want children to be able to comprehend what they read. Thus, reading tests are typically achievement tests. Most people are not particularly concerned about how well children can reorder numbers and letters in their heads. Thus, the WISC-IV Letter-Number Sequencing subtest is typically used as an aptitude test, presumably because the ability it measures is a necessary component of being able to master algebra, program computers, follow the chain of logic presented by debating candidates, and other skills that people in our culture care about.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.


The Compositator: New Software for the Woodcock-Johnson III

I am very excited to announce that the Woodcock-Muñoz Foundation’s WMF Press has published my FREE software program with an admittedly silly name:

The Compositator!

Its purpose is anything but silly, though. The feature that gives it its name is that you can create custom composite scores from any combination of tests in the three WJ III batteries (Cognitive, Achievement, and Diagnostic Supplement).

For example, Picture Vocabulary and Academic Knowledge from the WJ III Achievement battery can be combined with Verbal Comprehension and General Information from the WJ III Cognitive battery to form a more reliable and more comprehensive measure of crystallized intelligence.
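The psychometric payoff of such a composite can be sketched with Mosier’s (1943) formula for the reliability of an equally weighted sum of standardized scores. The subtest reliabilities and intercorrelations below are illustrative made-up numbers, not actual WJ III statistics:

```python
import numpy as np

# Four hypothetical crystallized-knowledge subtests (e.g., Verbal
# Comprehension, General Information, Picture Vocabulary, Academic
# Knowledge). All values are invented for illustration.
rxx = np.array([0.90, 0.88, 0.85, 0.87])  # subtest reliabilities

# Intercorrelations of the observed z-scores (diagonal = 1)
R = np.array([
    [1.0, 0.6, 0.6, 0.6],
    [0.6, 1.0, 0.6, 0.6],
    [0.6, 0.6, 1.0, 0.6],
    [0.6, 0.6, 0.6, 1.0],
])

# Mosier's formula: error variances add, but so do the covariances,
# so the composite is more reliable than any single subtest.
composite_var = R.sum()                       # variance of the summed z-scores
composite_rxx = 1 - (1 - rxx).sum() / composite_var
print(round(composite_rxx, 3))                # → 0.955
```

With these made-up values, the four-test composite’s reliability (.955) exceeds that of the best single subtest (.90), which is the sense in which combining subtests yields a “more reliable” measure.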

The Compositator is a supplement to the scoring software for the WJ III (either the WJ III Compuscore and Profiles Program or the WIIIP). It will not run on a machine unless one of these programs is installed.

The Compositator not only allows you to combine subtests in a psychometrically sound manner, it allows you to combine statistical information in ways not previously possible. For example, it allows you to create a comprehensive model of reading ability, specifying the relationships among all the various cognitive and academic abilities in the WJ III. From there, you can do things like estimate how much a person’s reading comprehension would improve if auditory processing were remediated, along with the simultaneous effects on reading decoding and reading fluency.
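To give the flavor of that kind of what-if reasoning (this is not the Compositator’s actual model, and the standardized path coefficients are invented), here is a toy path model in which auditory processing influences the reading skills only through decoding, so downstream effects are products of the paths:

```python
# Hypothetical standardized path coefficients (made up, NOT from the
# WJ III norms): auditory processing (Ga) -> decoding -> fluency,
# and decoding -> comprehension.
beta = {
    ("Ga", "decoding"): 0.40,
    ("decoding", "fluency"): 0.60,
    ("decoding", "comprehension"): 0.50,
}

def downstream_change(delta_ga):
    """Predicted change (in SD units) in each reading skill if
    auditory processing improves by delta_ga SDs, tracing each path."""
    d_dec = beta[("Ga", "decoding")] * delta_ga
    return {
        "decoding": d_dec,
        "fluency": beta[("decoding", "fluency")] * d_dec,
        "comprehension": beta[("decoding", "comprehension")] * d_dec,
    }

print(downstream_change(1.0))
```

In this sketch, a 1 SD gain in auditory processing yields a 0.40 SD gain in decoding and correspondingly smaller simultaneous gains in fluency and comprehension, which is the shape of the question the program lets you ask with real model estimates.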

There is much more that the program can do. I’ve been working on this program for the past 3 years and have been thinking about it since 1999 when I was in graduate school. There is a comprehensive manual and video tutorials to get you started.

Kevin McGrew has been the earliest and most enthusiastic supporter of the program. His generous description of the program is here.

I hope that you find the program useful. I would love to hear from you, especially if you have ideas for improving the program.
