
Classic Prose Is Simple, Not Simplistic

Simple words, carefully arranged, stick in the memory and influence action long after they have been read. Let us consider three pithy one-liners written by masters of the classic style.


Marie de Rabutin-Chantal, Madame de Sévigné (1626–1696)

I fear nothing so much as a man who is witty all day long.


Here Madame de Sévigné jolts us into delightful awareness of a truth we have always felt but never articulated. Furthermore, she has done us the great honor of trusting us to apply the appropriate scope to her generalization about the dangers of too much wit. To challenge her on her wording—that chronically witty men could not possibly frighten her more than ferocious beasts, incurable disease, and invading soldiers—breaks the spell of her obvious hyperbole and displeases the Madame.

François VI, Duc de La Rochefoucauld (1613–1680)

The refusal of praise is but the wish to be praised twice.

With maximum efficiency and minimum effort, La Rochefoucauld performs verbal jujitsu on the excessively modest. Stop making yourself the center of attention, he says. Don’t be so awkward about letting people be nice to you. Just thank the person and be done with it.

Blaise Pascal (1623–1662)

I have made this letter longer than usual because I lack the time to make it shorter.

Pascal’s oft-quoted apology could have been utterly forgettable (e.g., “Sorry about the long letter, but I did not have enough time to edit it properly.”). It achieved immortality because Pascal skillfully led us to expect one thing and then surprised us with another. In this manner, a rather mundane observation—that editing for brevity is hard—feels fresh and insightful.

These examples of classic prose have a style of humor that does not belong in assessment reports, but they are nevertheless instructive. The three writers have noticed that even qualities that seem unambiguously positive—wit, modesty, and brevity—have hidden dangers, shortcomings, and costs. Assessment professionals, too, see the downsides of certain virtues and the hidden sense in what appear to be self-defeating behaviors. Similar to these masters of classic style, assessment professionals can make messages memorable with surprise, irony, and contrast:

  • Daniel is never comfortable, except when he is worrying. Worry helps him plan. Worry keeps him safe. To ask Daniel to stop worrying is to ask him to invite catastrophe.
  • Art and Lannie love each other so fiercely that 20 years of quarreling could not tear them apart.
  • Although Jackson intimidates other children, he is in some ways more afraid than they are. No one fears the bully more than the bully himself.
  • If Gina were more frightened of germs, she would not wash her hands so often. Her skin, rubbed raw from years of constant scrubbing, no longer protects her from infections.
  • For many years, procrastination has helped Karla be the productive person she is today. Procrastination may have its downsides, but it has been her partner in combating a worse problem: perfectionism. Her motto is “The task expands to fit the time allotted.” Only looming deadlines have had the power to focus her mind and reshuffle her priorities to work efficiently. Recently, however, this strategy has backfired dramatically …

It would strike the wrong tone if the entire report were ironic in this way, but a few memorable sentences might change a person’s life.

Excerpt from pp. 35–37 of Schneider, W. J., Lichtenberger, E. O., Mather, N., & Kaufman, N. L. (2018). Essentials of Assessment Report Writing (2nd ed.). Hoboken, NJ: Wiley.


Why Do Assessment Reports Exist at All?

Think of the time and effort we could save if we simply did our assessments, gathered the relevant parties, and then had an engaging conversation about our findings. Why not let an automated transcript of the conversation serve as the permanent record of the assessment? Abandon all hope, ye who enter here. Even if the practice were feasible, it would rest on a fundamental misunderstanding of the nature of an assessment report.

What a hammer does for the fist, what pliers do for the grip, what a telescope does for the eye, writing does for the mind. Unaided, the mind can contemplate solutions to complex problems, but attention wanders and memories fade. Writing not only preserves our thoughts but also sharpens our thinking. By sequencing sound on durable paper, we can contemplate the products of our own minds from a higher vantage—and with a steady gaze. Our words, now external objects, can be revised, reshaped, refined, reorganized, and most important, revisited. As Susan Sontag (2000) observed, “What I write is smarter than I am. Because I can rewrite it.”

Think of writing not as a way to transmit a message but as a way to grow and cook a message. Writing is a way to end up thinking something you couldn’t have started out thinking. —Peter Elbow (1998, p. 15)

Excerpt from p. 30 of Schneider, W. J., Lichtenberger, E. O., Mather, N., & Kaufman, N. L. (2018). Essentials of Assessment Report Writing (2nd ed.). Hoboken, NJ: Wiley.


Dr. Procrustes does not need to see you; he has your test scores.

I rock at the Tower of Hanoi—you could give me a stack of as many discs as you like, and I can move the whole stack from one peg to the other without any hesitation and without a single error. I don’t mean to be immodest about it, but it’s true. My performance is like 11.8 standard deviations above the mean, which by my calculations is so rare that if a million people had been born every second since the Big Bang, there would still be only a 2.7% chance that I would have been born by now—I feel very lucky (and honored) to be here.

You would be forgiven for thinking that I had excellent planning ability…but not if you voiced such an opinion out loud, within earshot of my wife, causing her to die of laughter—I would miss her very much. No, it is not by preternatural planning ability that I compete with only the gods in Tower of Hanoi tournaments-in-the-sky. In fact, the first time I tried it, my score was not particularly good. I am not going to say what it was, but the manual said that I ranked somewhere between the average Darwin Award winner and the person who invented English spelling rules. After giving the test some thought, however, I realized that each movement of the discs is mechanically determined by a simple rule. I will not say what the rule is for fear of compromising the validity of the test for more people. The rule is not so simple that you would figure it out while taking the test for the first time, but it is simple enough that once you learn it, you will be surprised at how easy the test becomes.

All kidding aside, it is important for the clinician to be mindful of the process by which a child performs well or poorly on a test. For me, the Tower of Hanoi does not measure planning. For others, it might. Edith Kaplan (1988) was extremely creative in her methods of investigating how people performed on cognitive tests. Kaplan-inspired tools such as the WISC-IV Integrated provide more formal methods of assessing strategy use. However, careful observation and even simply asking children how they approached a task (after the tests have been administered according to standard procedures) are often enlightening and can save time during the follow-up testing phase. For example, I once read about an otherwise low-performing boy who scored very well on the WISC-IV Block Design subtest. When asked how he did so well on it, he said that he had the test at home and that he practiced it often. The clinician doubted this very much, but his story turned out to be true! His mother was an employee at a university and saw someone from the Psychology Department throwing outdated WISC-III test kits into the garbage. She was intrigued and took one home for her children to play with.

I once gave the WAIS-III to a woman who responded to the Vocabulary subtest as if it were a free association test. I tried to use standard procedures to encourage her to give definitions of words, but the standard prompts (“Tell me more”) just made it worse. Finally, I broke with protocol and said, “These are fabulous answers and I like your creativity. However, I think I did not explain myself very well. If you were to look up this word in the dictionary, what might it say about what the word means?” In the report I noted the break with protocol, but I believe that the score she earned was much more reflective of her Lexical Knowledge than would have been the case had I followed procedures more strictly. I do not wish to be misunderstood, however; I never deviate from standard procedures except when I must. Even then, I conduct additional follow-up testing to make sure that the scores are correct.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.


Explaining the difference between vision and visual-spatial processing to parents.

Vision is the ability to see something and visual-spatial processing helps you make sense of what you see.

Vision is the ability to see what is there. Visual-spatial processing is the ability to see what is not there, too, in a sense. With good vision you can see what something looks like; with good visual-spatial processing you can imagine what something would look like if you turned it around or if you were standing somewhere else or if something else were covering part of it.

With good vision you can see objects; with good visual-spatial processing you can see how they might fit together.

With good vision you can see a bunch of lines and splotches of colors; with good visual-spatial processing you can see how those lines and splotches of color form meaningful patterns.

This is the ability that sculptors, painters, designers, engineers, and architects need. It comes in pretty handy for the rest of us too.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.


Aptitudes and Achievement: Definitions, Distinctions, and Difficulties

Achievement typically refers to knowledge and skills that are formally taught in academic settings. However, this definition of achievement can be broadened to include any ability that is valued and taught in a particular cultural setting (e.g., hunting, dancing, or computer programming). Aptitude refers to an individual’s characteristics that indicate the potential to develop a culturally valued ability, given the right circumstances. The difference between aptitudes and achievement at the definitional level is reasonably clear. However, at the measurement level, the distinction becomes rather murky.

Potential, which is latent within a person, is impossible to observe directly. It must be inferred by measuring characteristics that either are typically associated with an ability or are predictive of the future development of the ability. Most of the time, aptitude is assessed by measuring abilities that are considered to be necessary precursors of achievement. For example, children who understand speech have greater aptitude for reading comprehension than do children who do not understand speech. Such precursors may themselves be a form of achievement. For example, it is possible for researchers to consider students’ knowledge of history as an outcome variable that is intrinsically valuable. However, some researchers may measure knowledge of history as a predictor of being able to construct a well-reasoned essay on politics. Thus, aptitude and achievement tests are not distinguished by their content but by how they are used. If we use a test to measure current mastery of a culturally valued ability, it is an achievement test. If we use a test to explain or forecast mastery of a culturally valued ability, it is an aptitude test.

IQ tests are primarily used as aptitude tests. However, an inspection of the contents of most IQ tests reveals that many test items could be repurposed as items in an achievement test (e.g., vocabulary, general knowledge, and mental arithmetic items). Sometimes the normal roles of reading tests and IQ tests are reversed, such as when neuropsychologists estimate loss of function following a brain injury by comparing current IQ to performance on a word-reading test.

A simple method to distinguish between aptitude and achievement is to ask, “Do I care about whether a child has the ability measured by this test because it is inherently valuable or because it is associated with some other ability (the one that I actually care about)?” Most people want children to be able to comprehend what they read. Thus, reading tests are typically achievement tests. Most people are not particularly concerned about how well children can reorder numbers and letters in their heads. Thus, the WISC-IV Letter-Number Sequencing subtest is typically used as an aptitude test, presumably because the ability it measures is a necessary component of mastering algebra, programming computers, following the chain of logic presented by debating candidates, and other skills that people in our culture care about.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.


The Compositator: New Software for the Woodcock-Johnson III

I am very excited to announce that the Woodcock-Muñoz Foundation’s WMF Press has published my FREE software program with an admittedly silly name:

The Compositator!

Its purpose is anything but silly, though. The feature that gives the program its name is that you can create custom composite scores from any of the tests in the three WJ III batteries (Cognitive, Achievement, and Diagnostic Supplement).

For example, Picture Vocabulary and Academic Knowledge from the WJ III Achievement battery can be combined with Verbal Comprehension and General Information from the WJ III Cognitive battery to form a more reliable and more comprehensive measure of crystallized intelligence.
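
To make the idea concrete, here is a minimal Python sketch of the standard arithmetic for pooling correlated subtests into a single composite with a mean of 100 and a standard deviation of 15. The scores, reliabilities, and correlation matrix below are invented placeholders rather than WJ III norms, and this is not the Compositator's actual algorithm, just the general logic of why a composite is broader and more reliable than its parts.

    import numpy as np

    # Invented placeholder values -- NOT actual WJ III statistics.
    scores = np.array([112, 108, 115, 110])              # four subtest standard scores (mean 100, SD 15)
    reliabilities = np.array([0.92, 0.89, 0.81, 0.88])   # hypothetical subtest reliabilities

    # Hypothetical intercorrelations among the four subtests.
    R = np.array([
        [1.00, 0.70, 0.65, 0.68],
        [0.70, 1.00, 0.62, 0.71],
        [0.65, 0.62, 1.00, 0.66],
        [0.68, 0.71, 0.66, 1.00],
    ])

    # Convert to z-scores, sum them, and rescale the sum to mean 100, SD 15.
    z = (scores - 100) / 15
    var_sum = R.sum()                    # variance of the sum of unit-variance scores
    composite = 100 + 15 * z.sum() / np.sqrt(var_sum)

    # Mosier's formula for the reliability of an equally weighted composite.
    composite_rel = (reliabilities.sum() + (R.sum() - np.trace(R))) / var_sum

    print(round(composite, 1), round(composite_rel, 3))  # about 113.0 and 0.958

Because measurement error in each subtest is largely independent of the others, the pooled score ends up more reliable than any single subtest, which is the rationale for building the broader crystallized composite described above.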

The Compositator is a supplement to the scoring software for the WJ III (either the WJ III Compuscore and Profiles Program or the WIIIP). It will not run on a machine unless one of these programs is installed.

The Compositator not only allows you to combine subtests in a psychometrically sound manner but also lets you combine statistical information in ways that were not previously possible. For example, it allows you to create a comprehensive model of reading ability, specifying the relationships among all the various cognitive and academic abilities in the WJ III. From there, you can do things like estimate how much a person’s reading comprehension would improve if auditory processing were remediated. If auditory processing is remediated, what are the simultaneous effects on reading decoding, reading fluency, and reading comprehension?
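
To give a flavor of that kind of what-if question, here is a toy Python sketch of how a hypothetical change can be propagated through a simple path model in z-score units. The project_change helper and its path coefficients are invented for illustration; they are not the Compositator's model or WJ III estimates.

    # Toy path model in z-score units; the coefficients are invented, not WJ III estimates.
    #   decoding      = 0.45 * auditory processing  (+ other influences)
    #   fluency       = 0.60 * decoding             (+ other influences)
    #   comprehension = 0.30 * decoding + 0.40 * fluency

    def project_change(delta_auditory_z):
        """Propagate a hypothetical change in auditory processing (in SD units)
        through the toy model to the three reading outcomes."""
        d_decoding = 0.45 * delta_auditory_z
        d_fluency = 0.60 * d_decoding
        d_comprehension = 0.30 * d_decoding + 0.40 * d_fluency
        return d_decoding, d_fluency, d_comprehension

    # A 1 SD improvement in auditory processing would, in this toy model, imply gains of
    # roughly 0.45 SD in decoding, 0.27 SD in fluency, and 0.24 SD in comprehension.
    print(project_change(1.0))

The Compositator works with the actual relationships among the WJ III abilities and handles the whole network of effects at once; the sketch above only illustrates the general logic of asking such what-if questions.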

There is much more that the program can do. I’ve been working on this program for the past 3 years and have been thinking about it since 1999, when I was in graduate school. A comprehensive manual and video tutorials are available to get you started.

Kevin McGrew has been the earliest and most enthusiastic supporter of the program. His generous description of the program is here.

I hope that you find the program useful. I would love to hear from you, especially if you have ideas for improving the program.
