Cognitive Assessment

John Willis’ Comments on Reports newsletter makes me happy.

Whenever I find that John Willis has posted a new edition of his Comments on Reports newsletter, I read it greedily and gleefully. Each newsletter is filled with sharp-witted observations, apt quotations, and practical wisdom about writing better psychological evaluation reports.

Recent gems:

From #251:

The first caveat of writing reports is that readers will strive mightily to attach significant meaning to anything we write in the report. The second caveat is that readers will focus particularly on statements and numbers that are unimportant, potentially misleading, or — whenever possible — both. This is the voice of bitter experience.

Also from #251:

Planning is so important that people are beginning to indulge in “preplanning,” which I suppose is better than “postplanning” after the fact. One activity we often do not plan is evaluations.

From #207:

I still recall one principal telling the entire team that, if he could not trust the spelling in my report, he could not trust any of the information in it. This happened recently (about 1975), so it is fresh in my mind. Names of tests are important to spell correctly. Alan and Nadeen Kaufman spell their last name with a single f and only one n. David Wechsler spelled his name as shown, never as Weschler. The American version of the Binet-Simon scale was developed at Stanford University, not Standford. I have to keep looking it up, but it is Differential Ability Scales even though it is a scale for several abilities. Richard Woodcock may, for all I know, have attended the concert, but his name is not Woodstock.

MS Word Trick: Make your headings stay on the same page as the paragraph below

When I write psychological evaluation reports, I start with a template that has headings for the various sections. Until now, I always had to check the document before printing to make sure that no heading was stranded alone on the last line of a page, with its accompanying paragraph on the next page. It did not take much time to fix the problems, but it was a pain to re-paginate the report if I made later edits. The main cost was a bit of worry each time I finished a report.

All these years it never occurred to me to ask whether Microsoft engineers had anticipated this problem!

In Microsoft Word, there is an option to keep a paragraph on the same page as the paragraph that follows it. I use Word 2010 for Windows, so your experience might differ slightly. First, select the heading and click the dialog box launcher (the small arrow in the lower-right corner) of the Paragraph group on the Home tab.


Then click the Line and Page Breaks tab.


Then check the Keep with next box.


Now right-click Heading 1 on the Styles portion of the Home tab on the ribbon. Select Update Heading 1 to Match Selection.


Now everything you have marked as a Level 1 heading will stay with its accompanying paragraph. You can repeat the process for Level 2 and Level 3 headings, if needed.

I have now updated my template so that the headings behave properly.

A more thorough treatment of page breaks and other pagination tricks can be found here.

Cognitive Assessment, Principles of assessment of aptitude and achievement

Advice for Psychological Evaluation Reports: Write about people, not tests

At its best, the end product of a psychological assessment is that a child’s life is made better because something useful and true is communicated to people who can use that information to make better decisions. How is this information best communicated? I believe that it is by the skillful retelling of the story of the child’s struggle to cope with the difficulties that led to the testing referral.

Not only are humans storytelling creatures, we are also storylistening creatures. We are moved by drama, cleansed by tragedy, unified by cultural myths, and inspired by tales of heroic struggle. Most importantly, through stories we remember enormous amounts of information. Tabulated test results are inert until the evaluator weaves them together into a coherent narrative explanation that helps children and their caregivers construct a richer, more nuanced, and more organized understanding of the problem. Compare the following assessment results.

Explanation 1

On a test in which Judy had to repeat words and segment them into individual phonemes, Judy earned a standard score of 78, which is in the Borderline Range. Only 7 percent of children performed at Judy’s level or lower on this test. This test is a good predictor of the ability to read single words isolated from contextual cues. On a test that measures this ability, Judy scored an 83, which is in the 13th percentile or in the Low Average Range. Reading single words is necessary to understand sentences and paragraphs. On a test that requires the evaluee to read a paragraph and then answer questions that test the evaluee’s understanding of the text, Judy scored an 84, which is in the Low Average Range. This is in the 14th percentile. An 84 in Reading Comprehension is 24 points lower than her Full Scale IQ of 110 (75th percentile, High Average Range). This is significant at the .01 level and only 3% of children in Judy’s age range have a 24-point discrepancy or larger between Reading Comprehension and Full Scale IQ. Thus, Judy meets criteria for Reading Disorder. More specifically, Judy appears to have phonological dyslexia. Phonological dyslexia refers to difficulties in reading single words because of the inability to hear individual phonemes distinctly. This difficulty in decoding single words makes reading narrative text difficult because the reading process is slow and error prone. Intensive remediation in phonics skills followed by reading fluency training is recommended.
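As an aside, the percentiles quoted in Explanation 1 follow directly from the normal curve, since standard scores are scaled to a mean of 100 and a standard deviation of 15. The conversion can be sketched in a few lines of Python using only the standard library (an illustration, not part of the original report):

```python
import math

def percentile_from_standard_score(score, mean=100.0, sd=15.0):
    """Percent of the population scoring at or below this standard score,
    assuming scores are normally distributed with the given mean and SD."""
    z = (score - mean) / sd
    # Cumulative normal distribution computed via the error function
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Judy's scores from Explanation 1:
for label, score in [("Phoneme segmentation", 78),
                     ("Single-word reading", 83),
                     ("Reading comprehension", 84),
                     ("Full Scale IQ", 110)]:
    pct = percentile_from_standard_score(score)
    print(f"{label}: standard score {score}, percentile {pct:.0f}")
```

Rounded to whole numbers, these reproduce the 7th, 13th, 14th, and 75th percentiles cited above, which is exactly the kind of arithmetic that, as argued below, is better left in the background of the narrative.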

Explanation 2

For most 12-year-olds as bright as Judy is, reading is a skill that is so well developed and automatic that it becomes a pleasure. For Judy, however, reading is a chore. It takes sustained mental effort for her to read each word one by one. It then requires further concentration for her to go back and figure out what these individual words mean when they are strung together in complete sentences, paragraphs, and stories. It is a slow, laborious process that is often unpleasant for Judy.

Why did Judy, a bright and delightfully creative girl, fail to learn to read fluently? It is impossible to know with certainty. However, the problem that most likely first caused Judy to fall behind her peers is that she does not hear speech sounds as clearly as most people do. It is as if she needs glasses for her ears: The sounds are blurry. For example, although she can hear the whole word cat perfectly well, she might not recognize as easily as most children do that the word consists of three distinct sounds: |k|, |a|, and |t|. For this reason, she has to work harder to remember that these three sounds correspond to three separate letters: |k|=C, |a|=A, and |t|=T. With simple words like cat, Judy’s natural ability is more than sufficient to help her remember what the letters mean. However, learning to recognize and remember larger words, uncommonly used words, or words with irregular spellings is much more difficult for Judy than it is for most children.

Many children with the same difficulty in hearing speech sounds distinctly eventually learn to work around the problem and come to read reasonably well. However, Judy is a perceptive and sensitive girl. These traits are typically helpful but, unfortunately, they allowed her to be acutely aware, from very early on, that she did not read as well as her classmates. She clearly remembers that her friends and classmates giggled when she made reading errors that were, to them, inexplicable. For example, for a while she earned the nickname “Tornado Girl” when she was reading aloud in class and misread “volcano” as “tornado.” She came to dread reading aloud in class and felt growing levels of shame even when she read silently to herself. She began to avoid reading at all costs. She did not read for pleasure, even when the texts were easy enough for her to read because she felt, in her words, “dumb, dumb, and dumb.” Over the next several years, she fell further behind her peers. By avoiding reading, she never developed the smooth, automatic reading skills that are necessary to make reading a pleasurable and self-sustaining activity.

Although Judy’s ability to hear speech sounds distinctly is still low compared to that of her 12-year-old peers, this weakness is not what is holding her back now. Indeed, her current ability to hear speech sounds distinctly is actually better than that of most 6- and 7-year-olds, most of whom learn to read without difficulty. With extra help, Judy can learn to decode words phonetically. However, in order for her to develop her reading fluency and reading comprehension skills to the level of which she is capable, she will need to engage in sustained practice reading texts that are both interesting to Judy and at the correct level of difficulty. She is likely to be willing to read only if she is helped to manage the sense of shame she feels when she attempts to read a book. This may require the collaboration of a reading specialist and a behavior specialist with expertise in the cognitive-behavioral treatment of anxiety-related problems.

Comparing Explanations

I am reasonably confident that most readers would find the second explanation to be much more useful than the first. The second explanation is not better than the first simply because it is more detailed. Explanation 1 could have been supplemented with more details if I had taken the time to fill it with even more information about test results. The second explanation is not better simply because it avoids statistical jargon that is difficult for parents and teachers to understand. Even if the jargon were removed from the first explanation and inserted into the second, the second explanation would still be better.

The second explanation is better because it is more about Judy than about her performance on tests. The narrative explanation of how her reading problem developed and how it was maintained is better because it leads to better treatment recommendations. More importantly, it leads to recommendations that will be understood and remembered by Judy’s parents and teachers. One of the problems with the first explanation is, ironically, that it is not difficult to understand if it is properly explained. Most parents and teachers will nod their heads as they hear it. However, they are likely to forget the explanation as soon as they leave the room. Most of us are not accustomed to thinking about people in terms of sets of continuous variables. Without a narrative structure to hold them together, assessment details slip through the cracks of our memories quickly. It is unfortunate that a forgotten explanation, no matter how accurate, no matter how brilliant, is as helpful as no explanation at all.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford.

Advice for psychological evaluation reports: Make every sentence worth reading

I have made this [letter] longer, because I have not had the time to make it shorter.

– Blaise Pascal, “Lettres provinciales”, letter 16, 1657

The secret of being a bore is to tell everything.

 – Voltaire, “Sixième discours: sur la nature de l’homme,” Sept Discours en Vers sur l’Homme (1738)

A little inaccuracy sometimes saves tons of explanation.

– Saki, The Square Egg, 1924

When we get together, we psychologists often lament that we spend a lot of time writing psychological evaluation reports that no one reads, at least not in full. I have come to believe that this is mostly our fault. Much of what we write in our reports is boring (e.g., describing each test), canned (e.g., describing each test), confusing (e.g., describing each test), and irrelevant (e.g., describing each test). It would be an understatement to say that I am not the first to voice such opinions.

If we want people to read our reports carefully, we must write reports that are worth reading all the way through. If you insist on including boring, canned, confusing, and irrelevant content, consider tucking it away in an appendix.

Explain what you know, not how you know

As students, we are rewarded for “showing our work.” We are encouraged to state a position and then provide data and arguments that justify our claims. The resulting literary form (the student position paper) aligns well with the objectives of the course, but it rarely aligns with the purpose of psychological evaluation reports. Reports should focus on communicating to the reader something that is useful and true about an individual. Presenting observations and data and then walking the reader through the steps in our diagnostic reasoning is rarely helpful to non-specialists. Most readers need the results of our assessment (our interpretations and suggestions), not an account of our process.

My old reports are embarrassing

My earliest reports contained mini-tutorials on operant conditioning, attachment theory, psychometrics, and specific details about the tests I administered (e.g., the structure and format of WISC subtests). I naively thought that this information would be interesting and helpful to people. In retrospect, I think that writing these explanations may have helped me more than the reader. Bits and pieces of my newly acquired expertise were not fully integrated in my mind and writing everything out probably consolidated my understanding. Whatever the benefit for me, I cannot remember a time in which the inclusion of such details proved crucial to selecting the right interventions and I can remember times in which they were confusing or alienating to parents.

Bad habits I let go

Over the years, I began a long, slow process of letting go of the report templates I was given in graduate school and unlearning bad habits of my own invention.

  • I stopped talking about the names, content, and structure of tests and measures and focused on the constructs they measured. I stopped organizing my reports by test batteries and instead used a theoretical organization. If I learn something important about the evaluee’s personality during the academic achievement testing, I weave that information into the personality section (and I rarely explain how such information was obtained).
  • I stopped talking about numbers (e.g., standard scores and percentiles). Instead I describe what a person can or cannot do and why it matters. I still make extensive use of numbers in the initial stages of case conceptualization but at some point they fade into the background of the overall narrative.
  • I stopped talking about the details of my observations and simply stated the overall conclusions from my observations (combined with other data).
  • I stopped including information that was true but uninformative (e.g., the teen is left-handed but plays guitar right-handed). My “Background Information” section became the “Relevant Background Information” section. I often re-read reports after I am finished and try to remove details that clutter the overall message of the report. Often this means bucking tradition. For example, I was trained to ask about a great many details, including allergies. If a child’s allergies are so severe that they interfere with the ability to concentrate in school, they are worth reporting. However, in most cases a person’s mild allergies are not worth reporting.
  • I stopped merely reporting information (e.g., the scores may be underestimates of ability because sometimes the evaluee appeared to give up when frustrated by the hard items on a few tests) and instead focused on contextualizing and interpreting the information so that the implications are clear (e.g., outside of the testing in which situations and on which tasks is the evaluee likely to underperform and by how much?).
  • I stopped explaining why certain scores might be misleading. For example, if the WAIS-IV Arithmetic was high but other measures of working memory capacity were low (after repeated follow-up testing), I no longer explain that follow-up testing was needed, nor that at some point in the assessment process I was unsure about the person’s working memory abilities. I just explain what working memory is and why the person’s weakness matters. I do not feel the need, in most cases, to explain that the WAIS-IV Working Memory Index is inflated because of a high score on Arithmetic.
  • I stopped explaining why scores that measure the same thing are inconsistent. Non-professionals won’t understand the explanation and professionals don’t need it. If the inconsistency reveals something important (e.g., fluctuating attention), I just state what that something is and why it matters.
  • I stopped treating questionnaire data as more important and precise than interview data. I came to treat all questionnaires, no matter how long, as screeners. In most cases, I do not treat questionnaire data as a “test” that provides information that is independent of what the person said in the interview. Interview data and questionnaire data come from the same source. If the questionnaire data and the interview data are inconsistent, I interview the person until the inconsistency is resolved.
  • I stopped sourcing my data every time I made a statement. For example, I stopped writing, “On the MMPI-2 and in the interview, X reported high levels of depression. In an interview, X’s husband also reported that X had high levels of depression.” It does not usually matter where or how I obtained the information about the depression. What matters is whether the information is accurate and useful. In the narrative, I only report my final opinion of what is going on based on the totality of evidence, not the bits and pieces of information I collected along the way.
  • I stopped sourcing interview data when I was quite sure that it was correct. For example, I no longer write: “Susie’s mother reported that Susie’s reading difficulties were first noticed when she was in the first grade.” If I have every reason to believe that this is true, I simply say, “Susie’s reading difficulties were first noticed when she was in the first grade.” However, if I am uncertain that something Susie’s mother said is true or if I am reporting Susie’s mother’s opinion, I attribute the statement to her.

Allowing yourself to be wrong allows you to be right…eventually

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

– Stephen Hawking

It is wise to remember that you are one of those who can be fooled some of the time.

– Laurence J. Peter

We human beings are so good at pattern recognition that sometimes we find patterns that are not even there. I have never seen a cognitive profile, no matter how unusual and outlandish, that did not inspire a vivid interpretation that explained EVERYTHING about a child. In fact, the more outlandish, the better. On a few occasions, some of the anomalous scores that inspired the vivid interpretations turned out to be anomalous due to scoring errors. In these humbling experiences, I have learned something important. I noticed that in those cases, my interpretations seemed just as plausible to me as any other. If anything, I was more engaged with them because they were so interesting. Of course, there is nothing wrong with making sense of data and there is nothing wrong with doing so with a little creativity. Let your imagination soar! The danger is in taking yourself too seriously.

The scientific method is a system that saves us from our tendency to stop asking hard questions once we have convinced ourselves of something. Put succinctly, the scientific method consists of not trusting any explanation until it survives your best efforts to kill it. There is much to be gained in reserving some time to imagine all the ways in which your interpretation might be wrong. The price of freedom is responsibility. The price of divergent thinking is prudence. It is better to be right in the end than to be right right now.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford.

Advice for psychological evaluation reports: Render abstruse jargon in the vernacular

PRIMUS DOCTOR: Most learned bachelor whom I esteem and honor, I would like to ask you the cause and reason why opium makes one sleep.

BACHELIERUS: ….The reason is that in opium resides a dormitive virtue, of which it is the nature to stupefy the senses.

—from Molière’s Le Malade Imaginaire (1673)

A man thinks that by mouthing hard words he understands hard things.

—Herman Melville

The veil of ignorance can be woven of many threads, but the one spun with the jangly jargon of a privileged profession produces a diaphanous fabric of alluring luster and bewitching beauty. Such jargon not only impresses outsiders but comforts them with what Brian Eno called the last illusion: the belief that someone out there knows what is going on. Too often, it is a two-way illusion. Like Molière’s medical student, we psychologists fail to grasp that our (invariably Latinate) technical terms typically do not actually explain anything. There is nothing wrong with technical terms, per se; indeed, it would be hard for professionals to function without them. However, with them, it is easy to fall into logical traps and never notice. For example, saying that a child does not read well because she has dyslexia is not an explanation. It is almost a tautology, unless the time is taken to specify which precursors to reading are absent and thus make dyslexia an informative label.

An additional and not insubstantial benefit of using ordinary language is that you are more likely to be understood. This is not to say that your communication should be dumbed down to the point that the point is lost. Rather, as allegedly advised by Albert Einstein, “Make everything as simple as possible, but not simpler.”

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford.

Cognitive Assessment, My Software & Spreadsheets, Tutorial, Video

TableMaker for Psychological Evaluation Reports


I am proud to announce the release of my new computer program, TableMaker for Psychological Evaluation Reports. It is designed to help providers of psychological assessments organize and present test data in a simple, efficient, and theoretically informed manner. You enter an evaluee’s test scores in an order that is convenient to you, and theoretically organized tables are generated in MS Word.

This video tutorial explains how to use the program.

TableMaker is free. For now, unfortunately, it runs on Windows only. Mac users can still use the Excel spreadsheet I made several years ago, which can do much of what TableMaker does but is less convenient and less flexible.