I rock at the Tower of Hanoi—you could give me a stack of as many discs as you like and I can move the whole stack from one peg to the other without any hesitation and without a single error. I don’t mean to be immodest about it, but it’s true. My performance is like 11.8 standard deviations above the mean, which by my calculations is so rare that if a million people were born every second ever since the Big Bang, there is still only a 2.7% chance that I would have been born by now—I feel very lucky (and honored) to be here.
You would be forgiven for thinking that I had excellent planning ability…but not if you voiced such an opinion out loud, within earshot of my wife, causing her to die of laughter—I would miss her very much. No, it is not by preternatural planning ability that I compete with only the gods in Tower of Hanoi tournaments-in-the-sky. In fact, the first time I tried it, my score was not particularly good. I am not going to say what it was, but the manual said that I ranked somewhere between the average Darwin Award winner and the person who invented English spelling rules. After giving the test some thought, however, I realized that each movement of the discs is mechanically determined by a simple rule. I will not say what the rule is for fear of compromising the validity of the test for other people. The rule is not so simple that you would figure it out while taking the test for the first time, but it is simple enough that once you learn it, you will be surprised how easy the test becomes.
All kidding aside, it is important for the clinician to be mindful of the process by which a child performs well or poorly on a test. For me, the Tower of Hanoi does not measure planning. For others, it might. Edith Kaplan (1988) was extremely creative in her methods of investigating how people performed on cognitive tests. Kaplan-inspired tools such as the WISC-IV Integrated provide more formal methods of assessing strategy use. However, careful observations and even simply asking children how they approached a task (after the tests have been administered according to standard procedures) are often enlightening and can save time during the follow-up testing phase. For example, I once read about an otherwise low-performing boy who scored very well on the WISC-IV Block Design subtest. When asked how he did so well on it, he said that he had the test at home and that he practiced it often. The clinician doubted this very much, but his story turned out to be true! His mother was an employee at a university and saw someone from the Psychology Department throwing outdated WISC-III test kits into the garbage. She was intrigued and took one home for her children to play with.
I once gave the WAIS-III to a woman who responded to the WAIS-III Vocabulary subtest as if it were a free association test. I tried to use standard procedures to encourage her to give definitions of words, but the standard prompts (“Tell me more”) just made it worse. Finally, I broke with protocol and said, “These are fabulous answers and I like your creativity. However, I think I did not explain myself very well. If you were to look up this word in the dictionary, what might it say about what the word means?” In the report I noted the break with protocol, but I believe that the score she earned was much more reflective of her Lexical Knowledge than would have been the case had I followed procedures more strictly. I do not wish to be misunderstood, however; I never deviate from standard procedures except when I must. Even then, I conduct additional follow-up testing to make sure that the scores are correct.
This post is an excerpt from:
Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.