Cognitive Assessment

Habitual Hedging Is Unnecessary, Unattractive, and Annoying

To escape criticism—do nothing, say nothing, be nothing.

Elbert Green Hubbard (1909, p. 38)

If you want to be a stickler about it, you can remind people in every statement you make of the deep-seated uncertainty of mortal existence. However, in everyday communication we only introduce doubt when there is reasonable doubt. If you ask a stranger for the time, and he tells you that it is 3:15, you thank him and move along. If he says, “It might be 3:15,” you still thank him, but you look around for someone else with a watch.

In much academic writing, clarity runs a poor second to invulnerability.

Richard Hugo (1992, p. 11)

Expressions of doubt exist for a reason. Suppose someone tells you that Shelby is angry with you. You must decide what to do with that information. Now suppose that someone tells you that Shelby might be angry with you. This information might lead to a different course of action. If the person is quite sure about Shelby’s anger but added “might” because of her philosophical stance that everything is uncertain, she is correct in what she said but incorrect in what she communicated. We rely on social conventions to communicate much that is unstated. If the public is not accustomed to the ways in which we introduce doubt into our sentences, we are miscommunicating. Suppose you write,

Her mother reported that Julia has a “severe peanut allergy.”

You might think the subtext of this sentence is “See how careful I am? I am telling you where I got all my information. Also, I’m not an allergist so it is not my place to say how severe the allergy is. Therefore, I am using Julia’s mother’s words instead of my own.” Many readers will understand that this is all we mean. However, to some readers, we might as well have written,

The “woman” who claims to be Julia’s mother asserted, without evidence, that Julia (if that is indeed her name) has a so-called peanut allergy, which, for reasons unspecified, was described as “severe.”

Why do we write reports with hyper-precise language? We want to be right … and to be respectful. We also want not to be wrong, not to be challenged, and, if we are wrong, not to be responsible. You never know when someone might sue you for saying that an allergy is severe when in fact it is only moderately severe. Steven Pinker (2014) observed,

Writers acquire the hedge habit to conform to the bureaucratic imperative that’s abbreviated as CYA, which I’ll spell out as Cover Your Anatomy. They hope it will get them off the hook, or at least allow them to plead guilty to a lesser charge, should a critic ever try to prove them wrong. …A classic writer counts on the common sense and ordinary charity of his readers, just as in everyday conversation we know when a speaker means “in general” or “all else being equal.” If someone tells you that Liz wants to move out of Seattle because it’s a rainy city, you don’t interpret him as claiming that it rains there twenty-four hours a day seven days a week just because he didn’t qualify his statement with relatively rainy or somewhat rainy. … An adversary who is unscrupulous enough to give the least charitable reading to an unhedged statement will find an opening to attack the writer in a thicket of hedged ones anyway. … It’s not that good writers never hedge their claims. It’s that their hedging is a choice, not a tic. (pp. 44–45)

Let’s start with an excessively hedged statement and then explore some alternatives:

Julia’s mother’s CBCL Externalizing score of 78 suggests that Julia may engage in antisocial behavior more often than her peers.

Suggests? May? These words were no doubt intended as a sign of respect for the uncertainty inherent in the assessment process, but they also reveal an assessment in limbo and only half completed. If the evaluator has no other information about Julia, then, yes, the CBCL Externalizing score does no more than suggest the presence of problems Julia may have. But to stop there means that the evaluator does not understand what rating scales are for.

Rating scales are tools for collecting information efficiently and can focus our investigation on areas of particular concern. However, nothing rating scales can tell us is trustworthy enough to mention in a report—unless it has been corroborated. Once her parents, her teachers, and Julia herself have told us that she has a long history of truancy, shoplifting, and fistfights, the score is beside the point. We base our interpretation on the totality of evidence, not on a particular score. A corroborated score might still tell us something about the rarity of the problem, but to insist on words like suggest bespeaks a perversely cautious epistemology.

The information, interpretations, and conclusions in a classically written report have been thoroughly vetted by the examiner and are verifiable—at least in theory—by anyone. For this reason, they are stated simply, directly, and without hedging. Opinions, predictions, and preferences are clearly labeled as such when necessary, but without compulsive hand-wringing. In this way, the writer shows respect for the reader’s competence in recognizing an opinion for what it is. 

Remove Unnecessary Qualifications and Excessive Sourcing

Statement: If Julia’s mother’s recollection is accurate, Julia was born 6 weeks premature.
Reason for edit: If anyone is going to be accurate about such a matter, it is going to be Julia’s mother.

Statement: According to Julia’s teacher, he gives her extra incentives to stay focused on her seatwork.
Reason for edit: There is no reason to doubt Julia’s teacher’s words here. The original wording suggests that Julia’s teacher might have lied or, at best, might be confused.

Statement: The BASC-3 Self-Report of Personality indicates that Julia possibly has high levels of anxiety.
Reason for edit: Rating scales do not have enough authority to stand on their own. Your judgment cannot be outsourced to them. Once the interpretation has been properly confirmed, the reference to the rating scale as a source is superfluous.

Statement: Exposure therapy may help Julia manage her debilitating fear of dogs, but it is impossible to know for certain.
Revision: I recommend exposure therapy to help Julia manage her debilitating fear of dogs.
Reason for edit: Almost anything may help Julia. What is your recommendation? There is no need to undermine confidence in your suggestions. It is widely understood that a recommendation is not a guarantee. If you are not ready to make a suggestion you can stand by, your assessment is not yet finished.

At first, the classic style seems overly bold, as if the writers present their opinions as immutable laws. There is legitimate cause for concern here, but the worry is overstated. It is easy to spot the difference between the clear, disinterested pronouncements of classic prose and the bloviation and bluster of pompous windbags. If there is anything that we social creatures are good at, it is recognizing self-promotion, especially when the self-promoter’s interests do not align with our own. Furthermore, there is no set of writing guidelines in the world that will stop pompous windbags from engaging in pompous windbaggery. Therefore, we might as well design our rules of decorum for sensible people of good will.

When there are lingering doubts about the accuracy of a statement in a report, you should gather more evidence until you can say something more definite. No one benefits from words parsed so carefully they are watered down to meaninglessness with mushy maybes, could be sometimes, and possibly some days. These doubt-inducing words are indispensable tools, to be sure, but they are to be used with skill and judgment instead of mechanically inserted in every statement.

Writing in the classic style gives the writer certain license to be clear and direct, but no license for high-handedness. This freedom to be direct in writing is paid for by scrupulous scientific modesty and soul-searching doubt during the assessment phase. Assessment is not a parlor trick in which we guess from minimal information all of the person’s deepest secrets. Rather, we work collaboratively with the person and then verify with all relevant parties whether a possible interpretation is true. Thus, a properly vetted interpretation will come as no surprise when it appears in a report. If despite best efforts, the report is found to have an interpretive error, the report can be amended.

Obviously, hedging is warranted if you expect the report to be included in a lawsuit. If you wish to adopt the classic style, eliminating unnecessary qualification and hedging, but you still want to play it safe, you can include in your report a blanket disclaimer in which you acknowledge the possibility of error and that your observations, conclusions, and recommendations are simply your best guesses rather than claims of absolute certainty.

Excerpt from pp. 37–40 of Schneider, W. J., Lichtenberger, E. O., Mather, N., & Kaufman, N. L. (2018). Essentials of Assessment Report Writing (2nd ed.). Hoboken, NJ: Wiley.

John Willis’ Comments on Reports newsletter makes me happy.

Whenever I find that John Willis has posted a new edition of his Comments on Reports newsletter, I read it greedily and gleefully. Each newsletter is filled with sharp-witted observations, apt quotations, and practical wisdom about writing better psychological evaluation reports.

Recent gems:

From #251:

The first caveat of writing reports is that readers will strive mightily to attach significant meaning to anything we write in the report. The second caveat is that readers will focus particularly on statements and numbers that are unimportant, potentially misleading, or — whenever possible — both. This is the voice of bitter experience.

Also from #251:

Planning is so important that people are beginning to indulge in “preplanning,” which I suppose is better than “postplanning” after the fact. One activity we often do not plan is evaluations.

From #207:

I still recall one principal telling the entire team that, if he could not trust the spelling in my report, he could not trust any of the information in it. This happened recently (about 1975), so it is fresh in my mind. Names of tests are important to spell correctly. Alan and Nadeen Kaufman spell their last name with a single f and only one n. David Wechsler spelled his name as shown, never as Weschler. The American version of the Binet-Simon scale was developed at Stanford University, not Standford. I have to keep looking it up, but it is Differential Ability Scales even though it is a scale for several abilities. Richard Woodcock may, for all I know, have attended the concert, but his name is not Woodstock.

MS Word Trick: Make your headings stay on the same page as the paragraph below

When I write psychological evaluation reports, I start with a template that has headings for the various sections. Until now, I always had to check the document before printing to make sure that no headings were alone on the last line of the page, with its accompanying paragraph on the next page. It did not take much time to fix the problems, but it was a pain to re-paginate the report if I made future edits. Its main cost was a bit of worry each time I finished a report.

All these years it never occurred to me to ask whether Microsoft engineers had anticipated this problem!

In Microsoft Word, there is an option to keep a paragraph on the same page as the next paragraph. I use Word 2010 for Windows, so your experience might be slightly different. I select the heading and then click the small dialog-launcher arrow in the lower-right corner of the Paragraph group on the Home tab.

Then click the Line and Page Breaks tab.

Then check the Keep with next box.

Now right-click Heading 1 in the Styles group on the Home tab of the ribbon. Select Update Heading 1 to Match Selection.

Now everything you have marked as a Level 1 heading will stay with its accompanying paragraph. You can repeat the process for Level 2 and Level 3 headings, if needed.

I have now updated my template so that the headings behave properly.
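The same setting can also be applied programmatically. A .docx file is a zip archive whose word/styles.xml part defines each style, and adding a <w:keepNext/> element to a style’s paragraph properties is exactly what checking the Keep with next box does. The sketch below is illustrative only (the helper name add_keep_next is my own, not part of any library), and it works on the styles XML directly; "Heading1" is Word’s built-in id for the Heading 1 style.

```python
# Minimal sketch: add <w:keepNext/> to a heading style in word/styles.xml.
import xml.etree.ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
ET.register_namespace("w", W)

def add_keep_next(styles_xml: str, style_id: str = "Heading1") -> str:
    """Return styles_xml with <w:keepNext/> ensured on the given style."""
    root = ET.fromstring(styles_xml)
    for style in root.findall(f"{{{W}}}style"):
        if style.get(f"{{{W}}}styleId") == style_id:
            ppr = style.find(f"{{{W}}}pPr")  # paragraph properties
            if ppr is None:
                ppr = ET.SubElement(style, f"{{{W}}}pPr")
            if ppr.find(f"{{{W}}}keepNext") is None:
                ET.SubElement(ppr, f"{{{W}}}keepNext")
    return ET.tostring(root, encoding="unicode")

# A stripped-down styles.xml with one heading style, for demonstration:
sample = (
    f'<w:styles xmlns:w="{W}">'
    '<w:style w:type="paragraph" w:styleId="Heading1">'
    '<w:name w:val="heading 1"/>'
    '</w:style></w:styles>'
)
print("keepNext" in add_keep_next(sample))  # True
```

To apply this to a real document you would unzip the .docx, transform word/styles.xml, and rezip it; note that in a full styles.xml the schema expects pPr in a particular position within the style, which this sketch does not enforce. Third-party libraries such as python-docx expose the same setting more conveniently.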

A more thorough treatment of page breaks and other pagination tricks can be found here.

Advice for Psychological Evaluation Reports: Write about people, not tests

At its best, the end product of a psychological assessment is that a child’s life is made better because something useful and true is communicated to people who can use that information to make better decisions. How is this information best communicated? I believe that it is by the skillful retelling of the story of the child’s struggle to cope with the difficulties that led to the testing referral.

Not only are humans storytelling creatures, we are also storylistening creatures. We are moved by drama, cleansed by tragedy, unified by cultural myths, and inspired by tales of heroic struggle. Most importantly, through stories we remember enormous amounts of information. Tabulated test results are inert until the evaluator weaves them together into a coherent narrative explanation that helps children and their caregivers construct a richer, more nuanced, and more organized understanding of the problem. Compare the following assessment results.

Explanation 1

On a test in which Judy had to repeat words and segment them into individual phonemes, Judy earned a standard score of 78, which is in the Borderline Range. Only 7 percent of children performed at Judy’s level or lower on this test. This test is a good predictor of the ability to read single words isolated from contextual cues. On a test that measures this ability, Judy scored an 83, which is in the 13th percentile or in the Low Average Range. Reading single words is necessary to understand sentences and paragraphs. On a test that requires the evaluee to read a paragraph and then answer questions that test the evaluee’s understanding of the text, Judy scored an 84, which is in the Low Average Range. This is in the 14th percentile. An 84 in Reading Comprehension is 24 points lower than her Full Scale IQ of 110 (75th percentile, High Average Range). This is significant at the .01 level and only 3% of children in Judy’s age range have a 24-point discrepancy or larger between Reading Comprehension and Full Scale IQ. Thus, Judy meets criteria for Reading Disorder. More specifically, Judy appears to have phonological dyslexia. Phonological dyslexia refers to difficulties in reading single words because of the inability to hear individual phonemes distinctly. This difficulty in decoding single words makes reading narrative text difficult because the reading process is slow and error prone. Intensive remediation in phonics skills followed by reading fluency training is recommended.
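(An aside for the statistically curious: the percentile ranks quoted in Explanation 1 are not arbitrary. They follow from the normal curve on which standard scores are based, with a mean of 100 and a standard deviation of 15. The short check below is my own illustration, with shorthand labels rather than test names.)

```python
# Standard scores assume a normal distribution (mean 100, SD 15);
# a percentile rank is the percentage of same-age peers at or below a score.
from statistics import NormalDist

norms = NormalDist(mu=100, sigma=15)

for label, ss in [("phoneme segmentation", 78),
                  ("single-word reading", 83),
                  ("reading comprehension", 84),
                  ("Full Scale IQ", 110)]:
    pct = round(norms.cdf(ss) * 100)
    print(f"{label}: standard score {ss} = percentile rank {pct}")
# Reproduces the 7th, 13th, 14th, and 75th percentiles cited above.
```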

Explanation 2

For most 12-year-olds as bright as Judy is, reading is a skill that is so well developed and automatic that it becomes a pleasure. For Judy, however, reading is a chore. It takes sustained mental effort for her to read each word one by one. It then requires further concentration for her to go back and figure out what these individual words mean when they are strung together in complete sentences, paragraphs, and stories. It is a slow, laborious process that is often unpleasant for Judy.

Why did Judy, a bright and delightfully creative girl, fail to learn to read fluently? It is impossible to know with certainty. However, the problem that most likely first caused Judy to fall behind her peers is that she does not hear speech sounds as clearly as most people do. It is as if she needs glasses for her ears: The sounds are blurry. For example, although she can hear the whole word cat perfectly well, she might not recognize as easily as most children do that the word consists of three distinct sounds: |k|, |a|, and |t|. For this reason, she has to work harder to remember that these three sounds correspond to three separate letters: |k|=C, |a|=A, and |t|=T. With simple words like cat, Judy’s natural ability is more than sufficient to help her remember what the letters mean. However, learning to recognize and remember larger words, uncommonly used words, or words with irregular spellings is much more difficult for Judy than it is for most children.

Many children with the same difficulty in hearing speech sounds distinctly eventually learn to work around the problem and come to read reasonably well. However, Judy is a perceptive and sensitive girl. These traits are typically helpful but, unfortunately, they allowed her to be acutely aware, from very early on, that she did not read as well as her classmates. She clearly remembers that her friends and classmates giggled when she made reading errors that were, to them, inexplicable. For example, for a while she earned the nickname “Tornado Girl” when she was reading aloud in class and misread “volcano” as “tornado.” She came to dread reading aloud in class and felt growing levels of shame even when she read silently to herself. She began to avoid reading at all costs. She did not read for pleasure, even when the texts were easy enough for her to read because she felt, in her words, “dumb, dumb, and dumb.” Over the next several years, she fell further behind her peers. By avoiding reading, she never developed the smooth, automatic reading skills that are necessary to make reading a pleasurable and self-sustaining activity.

Although Judy’s ability to hear speech sounds distinctly is still low compared to that of her 12-year-old peers, this weakness is not what is holding her back now. Indeed, her current ability to hear speech sounds distinctly is better than that of most 6- and 7-year-olds, most of whom learn to read without difficulty. With extra help, Judy can learn to decode words phonetically. However, in order for her to develop her reading fluency and reading comprehension skills to the level of which she is capable, she will need to engage in sustained practice reading texts that are both interesting to her and at the correct level of difficulty. She is likely to be willing to read only if she is helped to manage the sense of shame she feels when she attempts to read a book. This may require the collaboration of a reading specialist and a behavior specialist with expertise in the cognitive-behavioral treatment of anxiety-related problems.

Comparing Explanations

I am reasonably confident that most readers would find the second explanation to be much more useful than the first. The second explanation is not better than the first simply because it is more detailed. Explanation 1 could have been supplemented with more details if I had taken the time to fill it with even more information about test results. The second explanation is not better simply because it avoids statistical jargon that is difficult for parents and teachers to understand. Even if the jargon were removed from the first explanation and inserted into the second, the second explanation would still be better.

The second explanation is better because it is more about Judy than about her performance on tests. The narrative explanation of how her reading problem developed and how it was maintained is better because it leads to better treatment recommendations. More importantly, it leads to recommendations that will be understood and remembered by Judy’s parents and teachers. One of the problems with the first explanation is, ironically, that it is not difficult to understand if it is properly explained. Most parents and teachers will nod their heads as they hear it. However, they are likely to forget the explanation as soon as they leave the room. Most of us are not accustomed to thinking about people in terms of sets of continuous variables. Without a narrative structure to hold them together, assessment details slip through the cracks of our memories quickly. It is unfortunate that a forgotten explanation, no matter how accurate, no matter how brilliant, is as helpful as no explanation at all.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York, NY: Oxford University Press.

Advice for psychological evaluation reports: Make every sentence worth reading

I have made this [letter] longer, because I have not had the time to make it shorter.

– Blaise Pascal, “Lettres provinciales”, letter 16, 1657

The secret of being a bore is to tell everything.

 – Voltaire, “Sixième discours: sur la nature de l’homme,” Sept Discours en Vers sur l’Homme (1738)

A little inaccuracy sometimes saves tons of explanation.

– Saki, The Square Egg, 1924

When we get together, we psychologists often lament that we spend a lot of time writing psychological evaluation reports that no one reads, at least not in full. I have come to believe that this is mostly our fault. Much of what we write in our reports is boring (e.g., describing each test), canned (e.g., describing each test), confusing (e.g., describing each test), and irrelevant (e.g., describing each test). It would be an understatement to say that I am not the first to voice such opinions.

If we want people to read our reports carefully, we must write reports that are worth reading all the way through. If you insist on including boring, canned, confusing, and irrelevant content, consider tucking it away in an appendix.

Explain what you know, not how you know

As students we are rewarded for “showing our work.” We are encouraged to state a position and then provide data and arguments that justify our claims. The resulting literary form (the student position paper) aligns well with the objectives of the course but it rarely aligns with the purpose of psychological evaluation reports. Reports should focus on communicating to the reader something that is useful and true about an individual. Presenting observations and data and then walking the reader through the steps in our diagnostic reasoning is rarely helpful to non-specialists. Most readers need the results of our assessment (our interpretations and suggestions), not an account of our process.

My old reports are embarrassing

My earliest reports contained mini-tutorials on operant conditioning, attachment theory, psychometrics, and specific details about the tests I administered (e.g., the structure and format of WISC subtests). I naively thought that this information would be interesting and helpful to people. In retrospect, I think that writing these explanations may have helped me more than the reader. Bits and pieces of my newly acquired expertise were not fully integrated in my mind and writing everything out probably consolidated my understanding. Whatever the benefit for me, I cannot remember a time in which the inclusion of such details proved crucial to selecting the right interventions and I can remember times in which they were confusing or alienating to parents.

Bad habits I let go

Over the years, I began a long, slow process of letting go of the report templates I was given in graduate school and unlearning bad habits of my own invention.

  • I stopped talking about the names, content, and structure of tests and measures and focused on the constructs they measured. I stopped organizing my reports by test batteries and instead used a theoretical organization. If I learn something important about the evaluee’s personality during the academic achievement testing, I weave that information into the personality section (and I rarely explain how such information was obtained).
  • I stopped talking about numbers (e.g., standard scores and percentiles). Instead I describe what a person can or cannot do and why it matters. I still make extensive use of numbers in the initial stages of case conceptualization but at some point they fade into the background of the overall narrative.
  • I stopped talking about the details of my observations and simply stated the overall conclusions from my observations (combined with other data).
  • I stopped including information that was true but uninformative (e.g., the teen is left-handed but plays guitar right-handed). My “Background Information” section became the “Relevant Background Information” section. I often re-read reports after I am finished and try to remove details that clutter the overall message of the report. Often this means bucking tradition. For example, I was trained to ask about a great many details, including allergies. If a child’s allergies are so severe that they interfere with the ability to concentrate in school, they are worth reporting. However, in most cases a person’s mild allergies are not worth reporting.
  • I stopped merely reporting information (e.g., the scores may be underestimates of ability because sometimes the evaluee appeared to give up when frustrated by the hard items on a few tests) and instead focused on contextualizing and interpreting the information so that the implications are clear (e.g., outside of the testing in which situations and on which tasks is the evaluee likely to underperform and by how much?).
  • I stopped explaining why certain scores might be misleading. For example, if the WAIS-IV Arithmetic was high but other measures of working memory capacity were low (after repeated follow-up testing), I no longer explain that follow-up testing was needed, nor that at some point in the assessment process I was unsure about the person’s working memory abilities. I just explain what working memory is and why the person’s weakness matters. I do not feel the need, in most cases, to explain that the WAIS-IV Working Memory Index is inflated because of a high score on Arithmetic.
  • I stopped explaining why scores that measure the same thing are inconsistent. Non-professionals won’t understand the explanation and professionals don’t need it. If the inconsistency reveals something important (e.g., fluctuating attention), I just state what that something is and why it matters.
  • I stopped treating questionnaire data as more important and precise than interview data. I came to treat all questionnaires, no matter how long, as screeners. In most cases, I do not treat questionnaire data as a “test” that provides information that is independent of what the person said in the interview. Interview data and questionnaire data come from the same source. If the questionnaire data and the interview data are inconsistent, I interview the person until the inconsistency is resolved.
  • I stopped sourcing my data every time I made a statement. For example, I stopped writing, “On the MMPI-2 and in the interview, X reported high levels of depression. In an interview, X’s husband also reported that X had high levels of depression.” It does not usually matter where or how I obtained the information about the depression. What matters is whether the information is accurate and useful. In the narrative, I only report my final opinion of what is going on based on the totality of evidence, not the bits and pieces of information I collected along the way.
  • I stopped sourcing interview data when I was quite sure that it was correct. For example, I no longer write: “Susie’s mother reported that Susie’s reading difficulties were first noticed when she was in the first grade.” If I have every reason to believe that this is true, I simply say, “Susie’s reading difficulties were first noticed when she was in the first grade.” However, if I am uncertain that something Susie’s mother said is true or if I am reporting Susie’s mother’s opinion, I attribute the statement to her.

Allowing yourself to be wrong allows you to be right…eventually

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

– Stephen Hawking

It is wise to remember that you are one of those who can be fooled some of the time.

– Laurence J. Peter

We human beings are so good at pattern recognition that sometimes we find patterns that are not even there. I have never seen a cognitive profile, no matter how unusual and outlandish, that did not inspire a vivid interpretation that explained EVERYTHING about a child. In fact, the more outlandish, the better. On a few occasions, some of the anomalous scores that inspired the vivid interpretations turned out to be the product of scoring errors. These humbling experiences taught me something important: my mistaken interpretations had seemed just as plausible to me as any other. If anything, I was more engaged with them because they were so interesting. Of course, there is nothing wrong with making sense of data, and there is nothing wrong with doing so with a little creativity. Let your imagination soar! The danger is in taking yourself too seriously.

The scientific method is a system that saves us from our tendencies not to ask the hard questions after we have convinced ourselves of something. Put succinctly, the scientific method consists of not trusting any explanation until it survives your best efforts to kill it. There is much to be gained in reserving some time to imagine all the ways in which your interpretation might be wrong. The price of freedom is responsibility. The price of divergent thinking is prudence. It is better to be right in the end than to be right right now.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York, NY: Oxford University Press.

Advice for psychological evaluation reports: Render abstruse jargon in the vernacular

PRIMUS DOCTOR: Most learned bachelor whom I esteem and honor, I would like to ask you the cause and reason why opium makes one sleep.

BACHELIERUS: … The reason is that in opium resides a dormitive virtue, of which it is the nature to stupefy the senses.

—from Molière’s Le Malade Imaginaire (1673)

A man thinks that by mouthing hard words he understands hard things.

—Herman Melville

The veil of ignorance can be woven of many threads, but the one spun with the jangly jargon of a privileged profession produces a diaphanous fabric of alluring luster and bewitching beauty. Such jargon not only impresses outsiders but comforts them with what Brian Eno called the last illusion: the belief that someone out there knows what is going on. Too often, it is a two-way illusion. Like Molière’s medical student, we psychologists fail to grasp that our (invariably Latinate) technical terms typically do not actually explain anything. There is nothing wrong with technical terms per se; indeed, it would be hard for professionals to function without them. However, with them, it is easy to fall into logical traps and never notice. For example, saying that a child does not read well because she has dyslexia is not an explanation. It is almost a tautology, unless the time is taken to specify which precursors to reading are absent and thus make dyslexia an informative label.

An additional and not insubstantial benefit of using ordinary language is that you are more likely to be understood. This is not to say that your communication should be dumbed down to the point that the point is lost. Rather, as allegedly advised by Albert Einstein, “Make everything as simple as possible, but not simpler.”

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York, NY: Oxford University Press.

TableMaker for Psychological Evaluation Reports

I am proud to announce the release of my new computer program, TableMaker for Psychological Evaluation Reports. It is designed to help providers of psychological assessments organize and present test data in a simple, efficient, and theoretically informed manner. You enter an evaluee’s test scores in an order that is convenient to you, and theoretically organized tables are generated in MS Word.

TableMaker is free. For now, unfortunately, it runs on Windows only.
