My Software & Spreadsheets, Psychometrics, R, Statistics, Video

An easy way to simulate data according to a specific structural model.

I have made an easy-to-use Excel spreadsheet that can simulate data according to a latent structure that you specify. You do not need to know anything about R, but you will need to install it. RStudio is not necessary, but it makes life easier. In this video tutorial, I explain how to use the spreadsheet.

This project is still “in beta” so there may still be errors in it. If you find any, let me know.

If you need something with more features that is further along in its development cycle, consider simulating data with the R package simsem.

Cognitive Assessment, Psychometrics, Statistics, Tutorial, Video

Conditional normal distributions provide useful information in psychological assessment

Conditional Normal Distribution

Conditional normal distributions are really useful in psychological assessment. We can use them to answer questions like:

  • How unusual is it for someone with a vocabulary score of 120 to have a score of 90 or lower on reading comprehension?
  • If that person also has a score of 80 on a test of working memory capacity, how much does the risk of scoring 90 or lower on reading comprehension increase?

What follows might be mathematically daunting. Just let it wash over you if it becomes confusing. At the end, there is a video in which I show how to use a spreadsheet that does all the calculations.

Unconditional Normal Distributions

Suppose that variable Y represents reading comprehension test scores. Here is a fancy way of saying Y is normally distributed with a mean of 100 and a standard deviation of 15:

Y\sim N(100,15^2)

In this notation, “~” means is distributed as, and N means normally distributed with a particular mean (μ) and variance (σ²).

If we know literally nothing about a person from this population, our best guess is that the person’s reading comprehension score is at the population mean. One way to say this is that the person’s expected value on reading comprehension is the population mean:

E(Y)=\mu_Y = 100

The 95% confidence interval around this guess is:

95\%\, \text{CI} = \mu_Y \pm z_{95\%} \sigma_Y

95\%\, \text{CI} \approx 100 \pm 1.96*15 = 70.6 \text{ to } 129.4
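This calculation is simple enough to check with a few lines of Python (a sketch; the variable names are mine, not from any spreadsheet):

```python
# 95% CI for an unconditional normal distribution, Y ~ N(100, 15^2)
mu_y = 100.0    # population mean
sigma_y = 15.0  # population standard deviation
z95 = 1.96      # approximate z for a 95% interval

lower = mu_y - z95 * sigma_y  # ~70.6
upper = mu_y + z95 * sigma_y  # ~129.4
print(lower, upper)
```

With the more precise z of 1.959964 the interval is essentially unchanged.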

Unconditional Normal Distribution with 95% CI

Conditional Normal Distributions

Simple Linear Regression

Now, suppose that we know one thing about the person: the person’s score on a vocabulary test. We can let X represent the vocabulary score, whose distribution is the same as that of Y:

X\sim N(100,15^2)

If we know that this person scored 120 on vocabulary (X), what is our best guess as to what the person scored on reading comprehension (Y)? This guess is a conditional expected value. It is “conditional” in the sense that the expected value of Y depends on what value X has. The pipe symbol “|” is used to note a condition like so:

E(Y|X=120)

This means, “What is our best guess for Y if X is 120?”

What if we don’t want to be specific about the value of X but want to refer to any particular value of X? Oddly enough, it is traditional to use the lowercase x for that. That is, X refers to the variable as a whole, and x refers to any particular value of variable X. Thus, if I know that variable X happens to have a particular value x, the expected value of Y is:

E(Y|X=x)=\sigma_Y \rho_{XY}\dfrac{x-\mu_X}{\sigma_X}+\mu_Y

where ρXY is the correlation between X and Y.

You might recognize that this is a linear regression formula and that:

E(Y|X=x)=\hat{Y}

where “Y-hat” (Ŷ) is the predicted value of Y when X is known.

Let’s assume that the relationship between X and Y is bivariate normal like in the image at the top of the post:

\begin{bmatrix}X\\Y\end{bmatrix}\sim N\left(\begin{bmatrix}\mu_X\\ \mu_Y\end{bmatrix},\begin{bmatrix}\sigma_X^2&\rho_{XY}\sigma_X\sigma_Y\\ \rho_{XY}\sigma_X\sigma_Y&\sigma_Y^2\end{bmatrix}\right)

The first term in the parentheses is the vector of means and the second term (the square matrix in the brackets) is the covariance matrix of X and Y. It is not necessary to understand the notation. The main point is that X and Y are both normal, they have a linear relationship, and the conditional variance of Y at any value of X is the same.

The conditional standard deviation of Y at any particular value of X is:

\sigma_{Y|X=x}=\sigma_Y\sqrt{1-\rho_{XY}^2}

This is the standard deviation of the conditional normal distribution. In the language of regression, it is the standard error of the estimate (σe). It is the standard deviation of the residuals (errors). Residuals are simply the amount by which your guesses differ from the actual values.

e = y - E(Y|X=x)=y-\hat{Y}

So,

\sigma_{Y|X=x}=\sigma_e

So, putting all this together, we can answer our question:

How unusual is it for someone with a vocabulary score of 120 to have a score of 90 or lower on reading comprehension?

The expected value of Y (Ŷ) is:

E(Y|X=120)=15\rho_{XY}\dfrac{120-100}{15}+100

Suppose that the correlation is 0.5. Therefore,

E(Y|X=120)=15*0.5\dfrac{120-100}{15}+100=110

This means that among all the people with a vocabulary score of 120, the average is 110 on reading comprehension. Now, how far off from that is 90?

e= y - \hat{Y}=90-110=-20

What is the standard error of the estimate?

\sigma_{Y|X=x}=\sigma_e=\sigma_Y\sqrt{1-\rho_{XY}^2}

\sigma_{Y|X=x}=\sigma_e=15\sqrt{1-0.5^2}\approx12.99

Dividing the residual by the standard error of the estimate (the standard deviation of the conditional normal distribution) gives us a z-score. It represents how far from expectations this individual is in standard deviation units.

z=\dfrac{e}{\sigma_e} \approx\dfrac{-20}{12.99}\approx -1.54

Using the standard normal cumulative distribution function (Φ) gives us the proportion of people scoring 90 or less on reading comprehension (given a vocabulary score of 120).

\Phi(z)\approx\Phi(-1.54)\approx 0.06

In Microsoft Excel, the standard normal cumulative distribution function is NORMSDIST. Thus, entering this into any cell will give the answer:

=NORMSDIST(-1.54)
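The whole chain of calculations for the simple regression case can also be checked in a few lines of Python using only the standard library (the helper phi below is my stand-in for Φ; variable names are mine):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 100.0, 15.0   # both tests share the same metric
rho = 0.5                 # correlation between vocabulary (X) and reading (Y)

x = 120.0                 # observed vocabulary score
y = 90.0                  # reading comprehension cutoff of interest

y_hat = sigma * rho * (x - mu) / sigma + mu  # conditional mean E(Y|X=120) = 110
sigma_e = sigma * math.sqrt(1 - rho ** 2)    # standard error of estimate, ~12.99
z = (y - y_hat) / sigma_e                    # standardized residual, ~ -1.54
print(round(phi(z), 3))                      # proportion scoring 90 or lower, ~0.06
```

The same answer NORMSDIST gives in Excel, without rounding z first.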

Conditional Normal when Vocabulary = 120

Multiple Regression

What proportion of people score 90 or less on reading comprehension if their vocabulary is 120 but their working memory capacity is 80?

Let’s call vocabulary X1 and working memory capacity X2. Let’s suppose they correlate at 0.3. The correlation matrix among the predictors (RX) is:

\mathbf{R_X}=\begin{bmatrix}1&\rho_{12}\\ \rho_{12}&1\end{bmatrix}=\begin{bmatrix}1&0.3\\ 0.3&1\end{bmatrix}

The validity coefficients are the correlations of Y with both predictors (RXY):

\mathbf{R}_{XY}=\begin{bmatrix}\rho_{Y1}\\ \rho_{Y2}\end{bmatrix}=\begin{bmatrix}0.5\\ 0.4\end{bmatrix}

The standardized regression coefficients (β) are:

\pmb{\mathbf{\beta}}=\mathbf{R_{X}}^{-1}\mathbf{R}_{XY}\approx\begin{bmatrix}0.418\\ 0.275\end{bmatrix}

Unstandardized coefficients can be obtained by multiplying the standardized coefficients by the standard deviation of Y (σY) and dividing by the standard deviation of the predictors (σX):

\mathbf{b}=\sigma_Y\pmb{\mathbf{\beta}}/\pmb{\mathbf{\sigma}}_X

However, in this case all the variables have the same metric and thus the unstandardized and standardized coefficients are the same.

The vector of predictor means (μX) is used to calculate the intercept (b0):

b_0=\mu_Y-\mathbf{b}' \pmb{\mathbf{\mu}}_X

b_0\approx 100-\begin{bmatrix}0.418\\ 0.275\end{bmatrix}^{'} \begin{bmatrix}100\\ 100\end{bmatrix}\approx 30.769

The predicted score when vocabulary is 120 and working memory capacity is 80 is:

\hat{Y}=b_0 + b_1 X_1 + b_2 X_2

\hat{Y}\approx 30.769+0.418*120+0.275*80\approx 102.9

The residual in this case is e = 90 - 102.9 = -12.9.

The multiple R² is calculated with the standardized regression coefficients and the validity coefficients:

R^2 = \pmb{\mathbf{\beta}}'\pmb{\mathbf{R}}_{XY}\approx\begin{bmatrix}0.418\\ 0.275\end{bmatrix}^{'} \begin{bmatrix}0.5\\ 0.4\end{bmatrix}\approx0.319

The standard error of the estimate is thus:

\sigma_e=\sigma_Y\sqrt{1-R^2}\approx 15\sqrt{1-0.319}\approx 12.38

The proportion of people with vocabulary = 120 and working memory capacity = 80 who score 90 or less is:

\Phi\left(\dfrac{e}{\sigma_e}\right)\approx\Phi\left(\dfrac{-12.9}{12.38}\right)\approx 0.15
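For two predictors, the matrix algebra above reduces to simple arithmetic. Here is a minimal Python sketch of the whole multiple regression calculation, solving the 2×2 system by Cramer's rule rather than an explicit matrix inverse (variable names are mine):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Predictor intercorrelation and validity coefficients
r12 = 0.3                 # vocabulary (X1) with working memory (X2)
r_y1, r_y2 = 0.5, 0.4     # each predictor with reading comprehension (Y)

# Standardized coefficients: solve R_X @ beta = R_XY (Cramer's rule for 2x2)
det = 1 - r12 ** 2
b1 = (r_y1 - r12 * r_y2) / det   # ~0.418
b2 = (r_y2 - r12 * r_y1) / det   # ~0.275

mu, sigma = 100.0, 15.0          # all variables share the IQ metric, so
                                 # standardized = unstandardized coefficients
b0 = mu - (b1 + b2) * mu         # intercept, ~30.769

y_hat = b0 + b1 * 120 + b2 * 80  # predicted reading score, ~102.9
e = 90 - y_hat                   # residual, ~ -12.9

r_squared = b1 * r_y1 + b2 * r_y2            # multiple R^2, ~0.319
sigma_e = sigma * math.sqrt(1 - r_squared)   # standard error of estimate, ~12.38
print(round(phi(e / sigma_e), 2))            # proportion scoring 90 or less, ~0.15
```

Because both predictors share the IQ metric, the division by predictor standard deviations in the unstandardized-coefficient formula drops out, as noted above.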

Here is a spreadsheet that automates these calculations.

Multiple Regression Spreadsheet

I explain how to use this spreadsheet in this YouTube video:

Cognitive Assessment, Death Penalty, Psychometrics, Statistics, Tutorial, Uncategorized, Video

Video Tutorial: Misunderstanding Regression to the Mean

One of the most widely misunderstood statistical concepts is regression to the mean. In this video tutorial, I address common false beliefs about regression to the mean and answer the following questions:

  1. What is regression to the mean?
  2. Do variables become less variable each time they are measured?
  3. Does regression to the mean happen all the time or just in certain situations?
  4. Does repeated testing cause people to come closer and closer to the mean?
  5. How is regression to the mean relevant in death penalty cases?

Cognitive Assessment, My Software & Spreadsheets, Psychometrics, Tutorial, Uncategorized, Video

Estimating Latent Scores in Individuals

How to estimate latent scores in individuals when there is a known structural model:

I wrote a commentary in a special issue of the Journal of Psychoeducational Assessment. My article proposes a new way to interpret cognitive profiles. The basic idea is to use the best available latent variable model of the tests and then estimate an individual’s latent scores (with confidence intervals around those estimates). I have made two spreadsheets available, one for the WISC-IV and one for the WAIS-IV.

Five-Factor Model of the WISC-IV

Four-Factor Model of the WAIS-IV

I decided not to provide a spreadsheet for the five-factor model of the WAIS-IV because Gf and g were so highly correlated in that model that it would be nearly impossible to distinguish between Gf and g in individuals. You can think of Gf and g as nearly synonymous (at the latent level).

Schneider, W. J. (2013). What if we took our models seriously? Estimating latent scores in individuals. Journal of Psychoeducational Assessment, 31, 186–201.

Cognitive Assessment, My Software & Spreadsheets, Psychometrics, Psychometrics from the Ground Up, Tutorial, Uncategorized, Video

Psychometrics from the Ground Up 9: Standard Scores and Why We Need Them

In this video tutorial, I explain why we have standard scores, why there are so many different kinds of standard scores, and how to convert between any two types of standard scores.

Here is my Excel spreadsheet that converts any type of standard score to any other type.
