
# Two visualizations for explaining “variance explained”

In my introductory statistics class, I feel uneasy when I have to explain what variance explained means. The term has two things I don’t like. First, I don’t like variance very much. I feel much more comfortable with standard deviations. I understand that at a deep level variance is a more fundamental concept than the standard deviation. However, variance is a poor descriptive statistic because there is no direct visual analog for variance in a probability distribution plot. In contrast, the standard deviation illustrates very clearly how much scores typically deviate from the mean. So, variance explained is hard to grasp in part because variance is hard to visualize.

The second thing I don’t like about variance explained is the whole “explained” business. As I mentioned in my last post, variance explained does not actually mean that we have explained anything, at least in a causal sense. That is, it does not imply that we know what is going on. It simply means that we can use one or more variables to predict things more accurately than before.

In many models, if X is correlated with Y, X can be said to “explain” variance in Y even though X does not really cause Y. However, in some situations the term variance explained is accurate in every sense:

X causes Y

In the model above, the arrow means that X really is a partial cause of Y. Why does Y vary? Because of variability in X, at least in part. In this example, 80% of Y’s variance is due to X, with the remaining variance due to something else (somewhat misleadingly termed error). It is not an “error” in the sense that something is wrong or that someone is making a mistake. It is merely that which causes our predictions of Y to be off. Prediction error is probably not a single variable. It is likely to be the sum total of many influences.

Because X and error are uncorrelated z-scores in this example, the path coefficients are equal to the correlations with Y. Squaring the correlation coefficients yields the variance explained. The coefficients for X and error are actually the square roots of .8 and .2, respectively. Squaring the coefficients tells us that X explains 80% of the variance in Y and error explains the rest.

# Visualizing Variance Explained

Okay, if X predicts Y, then the variance explained is equal to the correlation coefficient squared. Unfortunately, this is merely a formula. It does not help us understand what it means. Perhaps this visualization will help:

Variance Explained

If you need to guess every value of Y but you know nothing about Y except that it has a mean of zero, then you should guess zero every time. You’ll be wrong most of the time, but pursuing other strategies will result in even larger errors. The variance of your prediction errors will be equal to the variance of Y. In the picture above, this corresponds to a regression line that passes through the mean of Y and has a slope of zero. No matter what X is, you guess that Y is zero. The squared vertical distance from Y to the line is represented by the translucent squares. The average area of the squares is the variance of Y.

If you happen to know the value of X each time you need to guess what Y will be, then you can use a regression equation to make a better guess. Your prediction of Y is called Y-hat (Ŷ):

$\hat{Y}=b_0+b_1X=0+\sqrt{0.80}X\approx 0.89X$

When X and Y have the same variance, the slope of the regression line is equal to the correlation coefficient, 0.89. The distance from Ŷ (the predicted value of Y) to the actual value of Y is the prediction error. In the picture above, the variance of the prediction errors (0.2) is the average area of the squares when the slope is equal to the correlation coefficient.

Thus, when X is not used to predict Y, our prediction errors have a variance of 1. When we do use X to predict Y, the average size of the prediction errors shrinks from 1 to 0.2, an 80% reduction. This is what is meant when we say that “X explains 80% of the variance in Y.” It is the proportion by which the variance of the prediction errors shrinks.
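A quick simulation sketch (my own illustration, not from the original figures) can confirm this interpretation, using the same path coefficients, √0.8 and √0.2:

```r
# Sketch: simulate the causal model above (my own illustration).
set.seed(1)
n <- 100000
X <- rnorm(n)                  # standardized cause
error <- rnorm(n)              # everything else that influences Y
Y <- sqrt(0.8) * X + sqrt(0.2) * error
var(Y)                         # ≈ 1: error variance when we always guess the mean
var(Y - sqrt(0.8) * X)         # ≈ 0.2: error variance when we use X to predict Y
```

The variance of the prediction errors shrinks from about 1 to about 0.2, an 80% reduction.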

# An alternate visualization

Suppose that we flip 50 coins and record how many heads there are. We do this over and over. The values we record constitute the variable Y. The number of heads we get each time we toss the 50 coins happens to have a binomial distribution. The mean of a binomial distribution is determined by the probability p of an event occurring on a single trial (i.e., getting a head on a single toss) and the number of events k (i.e., the number of coins tossed). As k increases, the binomial distribution begins to resemble the normal distribution. The probability p of getting a head on any one coin toss is 0.5 and the number of coins k is 50. The mean number of heads over the long run is:

$\mu = pk=0.5*50=25$

The variance of the binomial distribution:

$\sigma^2 = p(1-p)k=0.5*(1-0.5)*50=12.5$

Before we toss the coins, we should guess that we will toss an average number of heads, 25. We will be wrong much of the time but our prediction errors will be as small as they can be, over the long run. The variance of our prediction errors is equal to the variance of Y, 12.5.
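A short simulation can verify these long-run values (the `rbinom` call here is my own illustration, not from the original post):

```r
# Sketch: simulate many rounds of tossing 50 coins (my own illustration).
set.seed(1)
Y <- rbinom(1000000, size = 50, prob = 0.5)  # heads out of 50 coins, repeated
mean(Y)  # ≈ 25 = pk
var(Y)   # ≈ 12.5 = p(1-p)k
```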

Now suppose that after tossing 80% of our coins (i.e., 40 coins), we count the number of heads. This value is recorded as variable X. The remaining 20% of the coins (10 coins) are then tossed and the total number of heads is counted from all 50 coins. We can use a regression equation to predict Y from X. The intercept will be the mean number of heads from the remaining 10 coins:

$\hat{Y} = b_0+b_1X=5+X$

In the diagram below, each peg represents a coin toss. If the outcome is heads, the dot moves right. If the outcome is tails, the dot moves left. The purple line represents the probability distribution of Y before any coin has been tossed.

X explains 80% of the variance in Y.

When the dot gets to the red line (after 40 tosses, or 80% of the total), we can make a new guess as to what Y will be. This conditional distribution, represented by the blue line, has a mean equal to Ŷ and a variance of 2.5 (the variance of the 10 remaining coins).

The variability in Y is caused by the outcomes of 50 coin tosses. If 80% of those coins are the variable X, then X explains 80% of the variance in Y. The remaining 10 coins represent the variability of Y that is not determined by X (i.e., the error term). They determine 20% of the variance in Y.

If X represented only the first 20 of 50 coins, then X would explain 40% of the variance in Y.
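Both figures can be checked by simulation. This sketch is my own illustration; the intercept of 15 for the 20-coin case follows the same logic as above (the expected number of heads from the remaining 30 coins):

```r
# Sketch: toss 50 coins many times; use the first 40 (or first 20) coins as X.
set.seed(1)
n <- 100000
coins <- matrix(rbinom(n * 50, size = 1, prob = 0.5), nrow = n)
Y <- rowSums(coins)               # total heads from all 50 coins
X40 <- rowSums(coins[, 1:40])     # heads from the first 40 coins
X20 <- rowSums(coins[, 1:20])     # heads from the first 20 coins
1 - var(Y - (5 + X40)) / var(Y)   # ≈ 0.80: X explains 80% of the variance
1 - var(Y - (15 + X20)) / var(Y)  # ≈ 0.40: X explains 40% of the variance
```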

X explains 40% of the variance in Y.


# Unfortunate statistical terms

I like most of the technical terms we use in statistics. However, there are a few of them that I wish were easier to teach and remember. Many others have opined on such matters. This is my list of complaints:

• Statistical significance: This term is so universally hated I am surprised that we haven’t held a convention and banned its use. How many journalists have been misled by researchers’ technical use of significance? I wish we said something like “not merely random” or “probably not zero.”
• Type I/Type II error: It is hard to remember which is which because the terms don’t convey any clues as to what they mean. I wish more informative metaphors were used such as false hit and false miss.
• Power: Statistical power refers to the probability that the null hypothesis will be rejected, provided that the null hypothesis is false. The term is not self-explanatory and requires memorization! I wish we used a better term such as true hit rate or false null rejection rate. While we’re at it, α and β are not much better. False hit rate (or true null rejection rate) and false miss rate (or false null retention rate) would be easier to remember.
• Prediction error: The word error in English typically refers to an action that results in harm that could have been avoided if better choices had been made. In the context of statistical models, prediction errors are what you get wrong even though you have done everything right! I wish there were a word that referred to actions that were done in good faith yet resulted in unforeseeable harm. In this case, we already have a perfectly good substitute term that is widely used: disturbance. I suppose that the connotations of disturbance could generate different misunderstandings but in my estimation they are not as bad as those generated by error. I wish that we could just use the term residuals but that refers to something slightly different: the estimate of an error (residual:error::statistic:parameter). We can only know the errors if we know the true model parameters.
• Variance explained: This term works if the predictor is a cause of the criterion variable. However, when it is simply a correlate, it misleadingly suggests that we now understand what is going on. I wish the term were something more neutral such as variance predicted.
• Moderator/Mediator: At least in English, these terms sound so much alike that they are easily confused. I think that we should dump moderator along with related terms interaction effect, simple main effect, and simple slope. I think that the term conditional effects is more descriptive and straightforward.
• Biased: This word is hard to use in its technical sense when talking to non-statisticians. It sounds like we are talking about bigoted statistics! Unfortunately I can’t think of a good alternative to it (though I can think of some awkward ones like stable inaccuracy).
• Degrees of freedom: For me, this concept is extremely difficult to explain properly in an introductory course. Students are confused about what degrees have to do with it (or for that matter, freedom). I don’t know if I have a good replacement term (independent dimensions? non-redundancy index? matrix rank?).
• True score: This term sounds like it refers to the Aristotelian truth when in fact it is merely the long-term average score if there were no carryover effects of repeated measurement. Thus, a person’s true score on one IQ test might be quite different from the same person’s true score on another IQ test. Neither true score refers to the person’s “true cognitive ability.” To avoid this confusion, I would prefer something like the individual expected value, or IEV for short.
• Reliability: In typical usage, reliability refers to morally desirable traits such as trustworthiness and truthfulness. When statisticians refer to the reliability of scores or experimental results, to the untrained ear it probably sounds like we are talking about validity. I would prefer to talk about stability, consistency, or precision instead.

I am sure that there are many more!


# How unusual is it to have multiple scores below a threshold?

In psychological assessment, it is common to specify a threshold at which a score is considered unusual (e.g., 2 standard deviations above or below the mean). If we can assume that the scores are roughly normal, it is easy to estimate the proportion of people with scores below the threshold we have set. If the threshold is 2 standard deviations below the mean, then the Excel function NORMSDIST will tell us the answer:

=NORMSDIST(-2)

=0.023

In R, the pnorm function gives the same answer:

pnorm(-2)

How unusual is it to have multiple scores below the threshold? The answer depends on how correlated the scores are. If we can assume that the scores are multivariate normal, Crawford and colleagues (2007) show us how to obtain reasonable estimates using simulated data. Here is a script in R that depends on the mvtnorm package. Suppose that the 10 subtests of the WAIS-IV have correlations as depicted below. Because the subtests have a mean of 10 and a standard deviation of 3, a score of 4 or lower (2 standard deviations below the mean) is unusually low.

#WAIS-IV subtest names
WAISSubtests <- c("BD", "SI", "DS", "MR", "VO", "AR", "SS", "VP", "IN", "CD")

# WAIS-IV correlations
WAISCor <- rbind(
c(1.00,0.49,0.45,0.54,0.45,0.50,0.41,0.64,0.44,0.40), #BD
c(0.49,1.00,0.48,0.51,0.74,0.54,0.35,0.44,0.64,0.41), #SI
c(0.45,0.48,1.00,0.47,0.50,0.60,0.40,0.40,0.43,0.45), #DS
c(0.54,0.51,0.47,1.00,0.51,0.52,0.39,0.53,0.49,0.45), #MR
c(0.45,0.74,0.50,0.51,1.00,0.57,0.34,0.42,0.73,0.41), #VO
c(0.50,0.54,0.60,0.52,0.57,1.00,0.37,0.48,0.57,0.43), #AR
c(0.41,0.35,0.40,0.39,0.34,0.37,1.00,0.38,0.34,0.65), #SS
c(0.64,0.44,0.40,0.53,0.42,0.48,0.38,1.00,0.43,0.37), #VP
c(0.44,0.64,0.43,0.49,0.73,0.57,0.34,0.43,1.00,0.34), #IN
c(0.40,0.41,0.45,0.45,0.41,0.43,0.65,0.37,0.34,1.00)) #CD
rownames(WAISCor) <- colnames(WAISCor) <- WAISSubtests

#Means
WAISMeans<-rep(10,length(WAISSubtests))

#Standard deviations
WAISSD<-rep(3,length(WAISSubtests))

#Covariance Matrix
WAISCov<-WAISCor*WAISSD%*%t(WAISSD)

#Sample size
SampleSize<-1000000

library(mvtnorm)

#Make simulated data
d<-rmvnorm(n=SampleSize,mean=WAISMeans,sigma=WAISCov)
#To make this more realistic, you can round all scores to the nearest integer (d<-round(d))

#Threshold for abnormality
Threshold<-4

#Which scores are less than or equal to threshold
Abnormal<- d<=Threshold

#Number of scores less than or equal to threshold
nAbnormal<-rowSums(Abnormal)

#Frequency distribution table
p<-c(table(nAbnormal)/SampleSize)

#Plot
barplot(p,axes=F,las=1,
xlim=c(0,length(p)*1.2),ylim=c(0,1),
bty="n",pch=16,col="royalblue2",
xlab="Number of WAIS-IV subtest scores less than or equal to 4",
ylab="Proportion")
axis(2,at=seq(0,1,0.1),las=1)
text(x=0.7+0:10*1.2,y=p,labels=formatC(p,digits=2),cex=0.7,pos=3,adj=0.5)

The code produces this graph:

# Using the multivariate normal distribution

The simulation method works very well, especially if the sample size is very large. An alternate method that gives more precise numbers is to calculate how much of the multivariate normal distribution lies within certain bounds. That is, we find all of the regions of the multivariate normal distribution in which one and only one test is below the threshold and add up their probabilities. The process is then repeated for the regions in which two and only two tests are below the threshold, then three tests, four tests, and so on. This would be tedious to do by hand but takes only a few lines of code to do automatically.

AbnormalPrevalence <- function(Cor, Mean = 0, SD = 1, Threshold) {
  require(mvtnorm)
  k <- nrow(Cor)
  p <- rep(0, k)
  zThreshold <- (Threshold - Mean) / SD
  for (n in 1:k) {
    combos <- combn(1:k, n)
    ncombos <- ncol(combos)
    for (i in 1:ncombos) {
      # Region in which the n selected tests are at or below the threshold...
      u <- rep(Inf, k)
      u[combos[, i]] <- zThreshold
      # ...and the remaining tests are above it
      l <- rep(-Inf, k)
      l[seq(1, k)[-combos[, i]]] <- zThreshold
      p[n] <- p[n] + pmvnorm(lower = l, upper = u, mean = rep(0, k), sigma = Cor)[1]
    }
  }
  p <- c(1 - sum(p), p)
  names(p) <- 0:k

  barplot(p, axes = FALSE, las = 1,
          xlim = c(0, length(p) * 1.2), ylim = c(0, 1),
          col = "royalblue2",
          xlab = bquote("Number of scores less than or equal to " * .(Threshold)),
          ylab = "Proportion")
  axis(2, at = seq(0, 1, 0.1), las = 1)
  return(p)
}
Proportions <- AbnormalPrevalence(Cor = WAISCor, Mean = 10, SD = 3, Threshold = 4)

Using this method, the results are nearly the same but slightly more accurate. If the number of tests is large, the code can take a long time to run.


# Using the multivariate truncated normal distribution

In a previous post, I imagined that there was a gifted education program that had a strictly enforced selection procedure: everyone with an IQ of 130 or higher is admitted. With the (univariate) truncated normal distribution, we were able to calculate the mean of the selected group (mean IQ = 135.6).

# Multivariate Truncated Normal Distributions

Reading comprehension has a strong relationship with IQ $(\rho\approx 0.70)$. What is the average reading comprehension score among students in the gifted education program? If we can assume that reading comprehension is normally distributed $(\mu=100, \sigma=15)$ and that the relationship between IQ and reading comprehension is linear $(\rho=0.70)$, then we can answer this question using the multivariate truncated normal distribution, a multivariate normal distribution in which portions have been truncated (sliced off). In this case, the blue portion of the bivariate normal distribution of IQ and reading comprehension has been sliced off. The remaining portion (in red) is the distribution we are interested in. Here it is in 3D:

Bivariate normal distribution truncated at IQ = 130

Here is the same distribution with simulated data points in 2D:

Expected values of IQ and reading comprehension when IQ ≥ 130

# Expected Values

In the picture above, the expected value (i.e., mean) for the IQ of the students in the gifted education program is 135.6. In the last post, I showed how to calculate this value.

The expected value (i.e., mean) for the reading comprehension score is 124.9. How is this calculated? The general method is fairly complicated and requires specialized software such as the R package tmvtnorm. However in the bivariate case with a single truncation, we can simply calculate the predicted reading comprehension score when IQ is 135.6:

$\dfrac{\hat{Y}-\mu_Y}{\sigma_Y}=\rho_{XY}\dfrac{X-\mu_X}{\sigma_X}$

$\dfrac{\hat{Y}-100}{15}=0.7\dfrac{135.6-100}{15}$

$\hat{Y}=124.9$

In R, the same answer is obtained via the tmvtnorm package:

library(tmvtnorm)

#Variable names
vNames<-c("IQ","Reading Comprehension")

#Vector of Means
mu<-c(100,100)
names(mu)<-vNames;mu

#Vector of Standard deviations
sigma<-c(15,15)
names(sigma)<-vNames;sigma

#Correlation between IQ and Reading Comprehension
rho<-0.7

#Correlation matrix
R<-matrix(c(1,rho,rho,1),ncol=2)
rownames(R)<-colnames(R)<-vNames;R

#Covariance matrix
C<-diag(sigma)%*%R%*%diag(sigma)
rownames(C)<-colnames(C)<-vNames;C

#Vector of lower bounds (-Inf means negative infinity)
a<-c(130,-Inf)

#Vector of upper bounds (Inf means positive infinity)
b<-c(Inf,Inf)

#Means and covariance matrix of the truncated distribution
m<-mtmvnorm(mean=mu,sigma=C,lower=a,upper=b)
rownames(m$tvar)<-colnames(m$tvar)<-vNames;m

#Means of the truncated distribution
tmu<-m$tmean;tmu

#Standard deviations of the truncated distribution
tsigma<-sqrt(diag(m$tvar));tsigma

#Correlation matrix of the truncated distribution
tR<-cov2cor(m$tvar);tR


In running the code above, we learn that the standard deviation of reading comprehension has shrunk from 15 in the general population to 11.28 in the truncated population. In addition, the correlation between IQ and reading comprehension has shrunk from 0.70 in the general population to 0.31 in the truncated population.

# Marginal cumulative distributions

Among the students in the gifted education program, what proportion have reading comprehension scores of 100 or less? The question can be answered with the marginal cumulative distribution function. That is, what proportion of the red truncated region is less than 100 in reading comprehension? Assuming that the code in the previous section has been run already, this code will yield the answer of about 1.3%:

#Proportion of students in the gifted program with reading comprehension of 100 or less
ptmvnorm(lowerx=c(-Inf,-Inf),upperx=c(Inf,100),mean=mu,sigma=C,lower=a,upper=b)

The mean, sigma, lower, and upper parameters define the truncated normal distribution. The lowerx and the upperx parameters define the lower and upper bounds of the subregion in question. In this case, there are no restrictions except an upper limit of 100 on the second axis (the Y-axis).

If we plot the cumulative distribution of reading comprehension scores in the gifted population, it is close to (but not the same as) that of the conditional distribution of reading comprehension at IQ = 135.6.

Marginal cumulative distribution function of the truncated bivariate normal distribution

# What proportion does the truncated distribution occupy in the untruncated distribution?

Imagine that in order to qualify for services for intellectual disability, a person must score 70 or below on an IQ test. Every three years, the person must undergo a re-evaluation. Suppose that the correlation between the original test and the re-evaluation test is $\rho=0.90$. If the entire population were given both tests, what proportion would score 70 or lower on both tests? What proportion would score below 70 on the first test but not on the second test? Such questions can be answered with the pmvnorm function from the mvtnorm package (which is a prerequisite of the tmvtnorm package and is thus already loaded if you ran the previous code blocks).

library(mvtnorm)
#Means
IQmu<-c(100,100)

#Standard deviations
IQsigma<-c(15,15)

#Correlation
IQrho<-0.9

#Correlation matrix
IQcor<-matrix(c(1,IQrho,IQrho,1),ncol=2)

#Covariance matrix
IQcov<-diag(IQsigma)%*%IQcor%*%diag(IQsigma)

#Proportion of the general population scoring 70 or less on both tests
pmvnorm(lower=c(-Inf,-Inf),upper=c(70,70),mean=IQmu,sigma=IQcov)

#Proportion of the general population scoring 70 or less on the first test but not on the second test
pmvnorm(lower=c(-Inf,70),upper=c(70,Inf),mean=IQmu,sigma=IQcov)

What are the means of these truncated distributions?

#Mean scores among people scoring 70 or less on both tests
mtmvnorm(mean=IQmu,sigma=IQcov,lower=c(-Inf,-Inf),upper=c(70,70))

#Mean scores among people scoring 70 or less on the first test but not on the second test
mtmvnorm(mean=IQmu,sigma=IQcov,lower=c(-Inf,70),upper=c(70,Inf))


Combining this information in a plot:

Thus, we can see that the multivariate truncated normal distribution can be used to answer a wide variety of questions, and with a little creativity we can apply it to many more.


# Using the truncated normal distribution

The term truncated normal distribution may sound highly technical but it is actually fairly simple and has many practical applications. If the math below is daunting, be assured that it is not necessary to understand the notation and the technical details. I have created a user-friendly spreadsheet that performs all the calculations automatically.

# The mean of a truncated normal distribution

Imagine that your school district has a gifted education program. All students in the program have an IQ of 130 or higher. What is the average IQ of this group? Assume that in your school district, IQ is normally distributed with a mean of 100 and a standard deviation of 15.

Questions like this one can be answered by calculating the mean of the truncated normal distribution. The truncated normal distribution is a normal distribution in which one or both ends have been sliced off (i.e., truncated). In this case, everything below 130 has been sliced off (and there is no upper bound).

Four parameters determine the properties of the truncated normal distribution:

μ = mean of the normal distribution (before truncation)
σ = standard deviation of the normal distribution (before truncation)
a = the lower bound of the distribution (can be as low as −∞)
b = the upper bound of the distribution (can be as high as +∞)

The formula for the mean of a truncated distribution is a bit of a mess but can be simplified by finding the z-scores associated with the lower and upper bounds of the distribution:

$z_a=\dfrac{a-\mu}{\sigma}$

$z_b=\dfrac{b-\mu}{\sigma}$

The expected value of the truncated distribution (i.e., the mean):
$E(X)=\mu+\sigma\dfrac{\phi(z_a)-\phi(z_b)}{\Phi(z_b)-\Phi(z_a)}$

Where $\phi$ is the probability density function of the standard normal distribution (NORMDIST(z,0,1,FALSE) in Excel, dnorm(z) in R) and $\Phi$ is the cumulative distribution function of the standard normal distribution (NORMSDIST(z) in Excel, pnorm(z) in R).

This spreadsheet calculates the mean (and standard deviation) of a truncated distribution. See the part below the plot that says “Truncated Normal Distribution.”

In R you could make a function to calculate the mean of a truncated distribution like so:

MeanNormalTruncated<-function(mu=0,sigma=1,a=-Inf,b=Inf){
mu+sigma*(dnorm((a-mu)/sigma)-dnorm((b-mu)/sigma))/(pnorm((b-mu)/sigma)-pnorm((a-mu)/sigma))
}

#Example: Find the mean of a truncated normal distribution with a mu = 100, sigma = 15, and lower bound = 130
MeanNormalTruncated(mu=100,sigma=15,a=130)

# The cumulative distribution function of the truncated normal distribution

Suppose that we wish to know the proportion of students in the same gifted education program who score 140 or more. The cumulative truncated normal distribution function tells us the proportion of the distribution that is less than a particular value.

$cdf=\dfrac{\Phi(z_x)-\Phi(z_a)}{\Phi(z_b)-\Phi(z_a)}$

Where $z_x = \dfrac{X-\mu}{\sigma}$

In the previously mentioned spreadsheet, the cumulative distribution function is the proportion of the shaded region that is less than the value you specify.

You can create your own cumulative distribution function for the truncated normal distribution in R like so:

cdfNormalTruncated<-function(x=0,mu=0,sigma=1,a=-Inf,b=Inf){
(pnorm((x-mu)/sigma)-pnorm((a-mu)/sigma))/(pnorm((b-mu)/sigma)-pnorm((a-mu)/sigma))
}
#Example: Find the proportion of the distribution less than 140
cdfNormalTruncated(x=140,mu=100,sigma=15,a=130)

In this case, the cumulative distribution function returns approximately 0.8316. Subtracting from 1, gives the proportion of scores 140 and higher: 0.1684. This means that about 17% of students in the gifted program can be expected to have IQ scores of 140 or more.1

# The truncated normal distribution in R

A fuller range of functions related to the truncated normal distribution can be found in the truncnorm package in R, including the expected value (mean), variance, pdf, cdf, quantile, and random number generation functions.

1 In the interest of precision, I need to say that because IQ scores are rounded to the nearest integer, a slight adjustment needs to be made. The true lower bound of the truncated distribution is not 130 but 129.5. Furthermore, we want the proportion of scores 139.5 and higher, not 140 and higher. This means that the expected proportion of students with IQ scores of “140” and higher in the gifted program is about 0.1718 instead of 0.1684. Of course, there is little difference between these estimates and such precision is not usually needed for “back-of-the-envelope” estimates such as this one.

# Viewing correlation from a different angle

The typical way that we display correlated data is that we plot the points on an XY plane. The data are correlated to the degree to which the points are contained within a narrow, slanted ellipse.

Correlation with Orthogonal Axes

I believe that this is, in fact, the most intuitive way to display correlated data. However, there is an alternate way of doing it that yields interesting insights.

# Oblique Axes

In the plot above, the X and Y axes are orthogonal (at a right angle). However, we can make scatterplots in which the axes are oblique (not orthogonal). This is hard to think about at first, but after a while it makes sense. Either way, each point lies at the intersection of two lines, one perpendicular to each axis. For example, point A (2,2) and point B (1,3) can be displayed with oblique axes like so:

Orthogonal vs. Oblique Axes

If we set the cosine of the angle between the X and Y axes equal to the correlation coefficient, something interesting happens. Suppose that X and Y are normally distributed z-scores with a correlation of 0.8. When the cosine of the angle between the axes equals the correlation coefficient, the data appear to be contained in a circle rather than in an ellipse.

Correlated data with oblique axes

What is the value of this way of looking at correlations? There are many insights to be had but for now I will focus on two. First, partially correlated data are partially redundant. Viewing the data with oblique axes gives us an alternate way of seeing how redundant the information provided by the two variables is. Second, viewing the data with oblique axes gives an idea as to what is happening with principal components analysis.
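The circle claim can be checked numerically. The sketch below is my own illustration: it recovers the plotting position of each point by dropping perpendiculars onto axes separated by an angle θ with cos θ = ρ. The recovered coordinates turn out to be uncorrelated with unit variance, so the cloud is circular.

```r
# Sketch (my own): oblique-axis plotting coordinates for correlated z-scores.
set.seed(1)
rho <- 0.8
n <- 10000
x <- rnorm(n)
y <- rho * x + sqrt(1 - rho^2) * rnorm(n)  # correlated z-scores
theta <- acos(rho)                         # axis angle with cos(theta) = rho
# Position P whose perpendicular projections onto the two axes equal x and y:
P1 <- x
P2 <- (y - x * cos(theta)) / sin(theta)
cor(P1, P2)        # ≈ 0: the plotted cloud is circular
c(sd(P1), sd(P2))  # both ≈ 1
```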

# Oblique Axes and Principal Components Analysis

Principal components analysis takes our data and summarizes it in the most economical way possible. With only 2 correlated variables, the first principal component is a summary of overall elevation of the 2 scores. If X and Y both equal 2 (and the correlation is 0.8), the score on the first principal component is about 2.11 (which, like all composite scores, is slightly more extreme than the weighted average of its parts).
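The 2.11 figure can be reproduced with a short sketch (my own, using base R’s `eigen` on the correlation matrix):

```r
# Sketch (my own): first principal component score for X = Y = 2 when rho = 0.8.
rho <- 0.8
R <- matrix(c(1, rho, rho, 1), nrow = 2)
e <- eigen(R)
w <- abs(e$vectors[, 1])  # equal loadings, (1,1)/sqrt(2); abs() fixes the arbitrary sign
pc1 <- sum(w * c(2, 2)) / sqrt(e$values[1])  # standardize by PC1's standard deviation
pc1  # ≈ 2.11
```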

In the plot above, the first principal component (PC1) is the red vector that bisects X and Y. The cosine of the angle between PC1 and the X-axis is X’s correlation with PC1 (also known as X’s loading on PC1). Because there are only two variables, X and Y have equal loadings on PC1.

The second principal component (PC2) is orthogonal to the first principal component. The meaning of PC2 depends on how many variables there are and their structure. In the case of two positively correlated variables, PC2 is a summary of the magnitude of the difference between the scores. If X = 2 and Y = 1, they differ by 1 standard score. If X and Y are highly correlated, this is a large difference and the score on PC2 would be large. If X and Y have a low correlation, this difference is not so large and the score on PC2 is more modest.

# Oblique Axes and the Mahalanobis Distance

The Mahalanobis distance is a measure of how unusual a profile of scores is in a particular population. Shown with oblique axes, the Mahalanobis distance is simply the distance from the point to the origin (at the population mean). Suppose that X and Y have a correlation of 0.90. As shown below, if X is 1 standard deviation above the mean and Y is 1 standard deviation below the mean, the Mahalanobis distance for this point is going to be large (about 4.5).

For k multivariate normal variables, the Mahalanobis distance has a χ distribution with k degrees of freedom (the χ distribution occurs when you take the square root of every value in the better-known χ² distribution). In the χ distribution with 2 degrees of freedom, a value of 4.5 is greater than 99.95% of values. Thus, (1, −1) is a quite unusual pair of scores if the z-scores correlate at ρ = 0.90.

Mahalanobis Distance of an atypical set of scores

If both X and Y are 1 standard deviation above the mean, the Mahalanobis distance would be fairly small (1.03). In the χ distribution with 2 degrees of freedom, a value of 1.03 is greater than only about 41% of values, making this a fairly typical pair of scores.

More typical scores
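Both distances can be reproduced with base R’s `mahalanobis` and `pchisq` functions; this sketch is my own, not from the original post:

```r
# Sketch (my own): Mahalanobis distances for the two profiles when rho = 0.90.
rho <- 0.9
S <- matrix(c(1, rho, rho, 1), nrow = 2)
d_atypical <- sqrt(mahalanobis(c(1, -1), center = c(0, 0), cov = S))  # ≈ 4.47
d_typical  <- sqrt(mahalanobis(c(1, 1),  center = c(0, 0), cov = S))  # ≈ 1.03
# Proportion of profiles closer to the mean (chi distribution with 2 df):
pchisq(c(d_atypical, d_typical)^2, df = 2)
```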


# Difference scores, the absolute deviation, and the half-normal distribution

In psychological assessment, sometimes we want to contrast two scores. For example, suppose we give two tests of visual-spatial ability to an individual. On Test A the score was 95 and on Test B the score was 75.

Two tests of visual-spatial ability differ by 20 points.

Both tests are measured with the index score metric (mean = 100, SD = 15). Because these tests are intended to measure the same ability, we are surprised to see that they differ by 20 points (20 index score points = 1⅓ standard deviations). How common is it for tests that allegedly measure the same thing to differ by 20 points or more?

The answer, of course, depends on the distributions of both variables and the form of the relationship between the two variables. In this case, let’s assume that the tests are multivariate normal, meaning that both variables have normal distributions and any linear combination of the two scores (including the difference of the scores) is also normal.

A Bivariate Normal Distribution with a correlation of 0.6

The relationship between the two variables is linear. Linear relationships are fully described by correlation coefficients. In this case, suppose that the correlation coefficient is 0.6.

Few variables found in nature have a true multivariate normal distribution. However, multivariate normal distributions describe cognitive ability data reasonably well.

# The mean of a difference score

The mean of the sum of two variables is the sum of the two means. That is,

$\mu_{A + B} = \mu_A + \mu_B=100+100=200$

It works the same way with subtraction:

$\mu_{A - B} = \mu_A - \mu_B=100-100=0$

# The standard deviation of a difference score

The standard deviation of the sum of two variables is the square root of the sum of the two variables’ covariance matrix. The covariance matrix is:

$\begin{matrix} & \text{A} & \text{B} \\ \text{A} & \sigma_A^2 & \sigma_{AB} \\ \text{B} & \sigma_{AB} & \sigma_B^2 \end{matrix}$

The sum of the covariance matrix is:

$\sigma_{A+B}=\sqrt{ \sigma_{A}^2 + 2\sigma_{AB} + \sigma_{B}^2}$

The covariance is the product of the two standard deviations and the correlation (ρ):

$\sigma_{AB}=\sigma_A \sigma_B \rho_{AB}$

Thus,

$\sigma_{A+B}=\sqrt{15^2+2*15*15*0.6+15^2}\approx 26.83$

The standard deviation of the difference of two variables is the same except that the covariance is negative.

$\sigma_{A-B}=\sqrt{ \sigma_{A}^2 - 2\sigma_{AB} + \sigma_{B}^2}$

$\sigma_{A-B}=\sqrt{15^2-2*15*15*0.6+15^2}\approx13.42$

# The prevalence of a difference score

If the two variables are multivariate normal, then the difference score is also normally distributed. The difference of A and B in this example is:

$A-B=95-75=20$

The population mean of the difference scores is 0 and the standard deviation is 13.24.

Using the z-score formula,

$z=\dfrac{X-\mu}{\sigma}=\dfrac{20-0}{13.42}\approx 1.49$

The cumulative distribution function of the standard normal distribution (Φ) is the proportion of scores to the left of a particular z-score. In Excel, the Φ function is the NORMSDIST function.

$\Phi(1.49)=\texttt{NORMSDIST}(1.49)\approx 0.93$

Thus about 7% (1−0.93=0.07) of people have a difference score of 20 or more in this particular direction and about 14% have difference score of 20 or more in either direction. Thus, in this case, a difference of 20 points or more is only somewhat unusual.

# The absolute deviation

The standard deviation is a sort of average deviation but it is not the arithmetic mean of the deviations. If you really want to know the average (unsigned) deviation, then you want the absolute deviation. Technically, the absolute deviation is the expected value of the absolute value of the deviation:

$\text{Absolute Deviation}=E(|X-\mu|)$

Sometimes the absolute deviation is the calculated as the average deviation from the median instead of from the mean. In the case of the normal distribution, this difference does not matter because the mean and median are the same.

In the normal distribution, the absolute deviation is about 80% as large as the standard deviation. Specifically,

$\text{Absolute Deviation}=\sqrt{\dfrac{2}{\pi}}\sigma$

# The absolute deviation of a difference score

If the two variables are multivariate normal, the difference score is also normal. We calculate the standard deviation of the difference score and multiply it by the square root of 2 over pi. In this case, the standard deviation of the difference score was about 13.42. Thus, the average difference score is:

$\sqrt{\dfrac{2}{\pi}}13.42\approx 10.70$

# Why use the absolute deviation?

The standard deviation is the standard way of describing variability. Why would we use this obscure type of deviation then? Well, most people have not heard of either kind of deviation. For people who have never taken a statistics course, it is very easy to talk about the average difference score (i.e., the absolute deviation). For example, “On average, these two scores differ by 11 points.” See how easy that was?

In contrast, imagine saying to statistically untrained people, “The standard deviation is the square root of the average squared difference from the population mean. In this case it is 13 points.” Sure, this explanation can be made simpler…but at the expense of accuracy.

The absolute deviation can be explained easily AND accurately.

# The half-normal distribution

Related to the idea of the absolute deviation is the half-normal distribution. The half-normal distribution occurs when we take a normally distributed variable and take the absolute value of all the deviations.

$Y=|X-\mu_X|$

To visualize the half-normal distribution, we divide the normal distribution in half at the mean and then stack the left side of the distribution on top of the right side. For example, suppose that we have a standard normal distribution and we divide the distribution in half like so:

The standard normal distribution divided at the mean

Next we flip the red portion and stack it on top of the blue portion like so:

The half-normal distribution is normal distribution folded in half, with the two halves stacked on top of each other.

What is the mean of the half-normal distribution? Yes, you guessed it—the absolute deviation of the normal distribution!

The cumulative distribution function of the half-normal distribution is:

$cdf_{\text{half-normal}}=\texttt{erf}\left(\dfrac{X}{\sqrt{2\sigma^2}}\right)$

In Excel the ERF function is the error function. Thus,

=ERF(20/SQRT(2*13.42^2))

=0.86

This means that about 86% of people have a difference score (in either direction) of 20 or less. About 14% have a difference score of 20 or more. Note that this is the same answer we found before using the standard deviation of the difference score.

Standard

# An easy way to simulate data according to a specific structural model.

I have made an easy-to-use Excel spreadsheet that can simulate data according to a latent structure that you specify. You do not need to know anything about R but you’ll need to install it. RStudio is not necessary but it makes life easier. In this video tutorial, I explain how to use the spreadsheet.

This project is still “in beta” so there may still be errors in it. If you find any, let me know.

If you need something with more features and is further along in its development cycle, consider simulating data with the R package simsem.

Standard

# Cronbach: Factor analysis is more like photography than chemistry.

Lee Cronbach would later achieve immortality for his methodological contributions (e.g., coefficient α, construct validity, aptitude by treatment interactions, and generalizability theory). His first big splash, though, was a 1949 textbook Essentials of Psychological Testing. Last week I was reading the 1960 edition of his textbook and found this skillfully worded comparison:

“Factor analysis is in no sense comparable to the chemist’s search for elements. There is only one answer to the question: What elements make up table salt? In factor analysis there are many answers, all equally true but not equally satisfactory (Guttman, 1955). The factor analyst may be compared to the photographer trying to picture a building as revealingly as possible. Wherever he sets his camera, he will lose some information, but by a skillful choice he will be able to show a large number of important features of the building.” p. 259

Standard

# I love the animation package in R!

I am grateful that we live in an age in which individuals like Yihui Xie make things that help others make things. Aside from his extraordinary work with knitr, I am enjoying the ability to make any kind of animation I want with his animation package. My introductory stats lectures are less boring when I can make simple graphs move:

Correlations

Code:

 library(Cairo)
library(animation)
library(mvtnorm)
n <- 1000  #Sample size
d <- rmvnorm(n, mean = c(0, 0), sigma = diag(2))  #Uncorrelated z-scores
colnames(d) <- c("X", "Y")  #Variable names
m <- c(100, 100)  #Variable means
s <- c(15, 15)  #Variable standard deviations
cnt <- 1000  #Counter variable
for (i in c(-0.9999, -0.999, seq(-0.995, 0.995, 0.005), 0.999, 0.9999,
0.999, seq(0.995, -0.995, -0.005), -0.999)) {
Cairo(file = paste0("S", cnt, ".png"), bg = "white", width = 700,
height = 700)  #Save to file using Cairo device
cnt <- cnt + 1  #Increment counter
rho <- i  #Correlation coefficient
XY <- matrix(rep(1, 2 * n), nrow = n) %*% diag(m) + d %*% chol(matrix(c(1,
rho, rho, 1), nrow = 2)) %*% diag(s)  #Make uncorrelated data become correlated
plot(XY, pch = 16, col = rgb(0, 0.12, 0.88, alpha = 0.3), ylab = "Y",
xlab = "X", xlim = c(40, 160), ylim = c(40, 160), axes = F,
main = bquote(rho == .(format(round(rho, 2), nsmall = 2))))  #plot data
lines(c(40, 160), (c(40, 160) - 100) * rho + 100, col = "firebrick2")  #Plot regression line
axis(1)  #Plot X axis
axis(2)  #Plot Y axis
dev.off()  #Finish plotting
}
ani.options(convert = shQuote("C:/Program Files/ImageMagick-6.8.8-Q16/convert.exe"),
ani.width = 700, ani.height = 800, interval = 0.05, ani.dev = "png",
ani.type = "png")  #Animation options
im.convert("S*.png", output = "CorrelationAnimation.gif", extra.opts = "-dispose Background",
clean = T)  #Make animated .gif
Standard