Part II — Computer Simulations · R

Generating Data — True Score Theory

Download the complete R script for this exercise to run it in RStudio.


Overview

In this simulation you will generate two imaginary test scores for 500 individuals using a measurement model called True Score Theory. Imagine that you have administered two achievement tests to 500 school children. True Score Theory assumes that every observed test score is composed of two parts — true ability and random error:

O = T + e

Here O is the observed score, T is the latent true ability, and e is random error. In real research you only observe O; you never see T or e directly. By simulating the data yourself, however, you know the true values — which makes it possible to evaluate how well statistical methods recover them.

We will create two tests, X and Y, each measuring the same true ability with independent errors:

X = T + eX
Y = T + eY

Both tests share the same true score T, so any difference between a child's X and Y scores must come entirely from measurement error.
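
Because T, eX, and eY are mutually independent, the variance identities of classical test theory follow directly:

var(X) = var(T) + var(eX)
var(Y) = var(T) + var(eY)
cov(X, Y) = var(T)

These identities drive everything you will observe below: the spread of the observed scores, their correlation, and the regression slope.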

Step 1 — Generate True Scores and Errors

The first block generates 500 true scores from a normal distribution with mean 0 and standard deviation 3, and two sets of random errors with mean 0 and standard deviation 1. The true score has a larger standard deviation than the errors, so each test reflects far more true ability than noise; the implied reliability is var(T)/var(X) = 9/10 = 0.9, which is reasonably high. The describe() function comes from the psych package, loaded at the top of the block.

library(psych)   # provides describe()

T  <- rnorm(500, mean = 0, sd = 3)   # true ability scores (note: masks R's built-in T/TRUE shorthand)
eX <- rnorm(500, mean = 0, sd = 1)   # random error for test X
eY <- rnorm(500, mean = 0, sd = 1)   # random error for test Y

SimData <- data.frame(T, eX, eY)
View(SimData)                        # inspect the simulated values in RStudio

describe(T, fast = TRUE)
describe(eX, fast = TRUE)
describe(eY, fast = TRUE)

Check that the means are close to zero and that sd(T) is about three times larger than sd(eX) and sd(eY). Small deviations from the specified values are expected because the data are randomly generated.
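
A compact way to run both checks at once (a minimal sketch over the SimData columns):

sapply(SimData, mean)   # each mean should be near 0
sapply(SimData, sd)     # should be near 3, 1, and 1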

Step 2 — Construct Observed Test Scores

Add the true score to each error term to produce the two observed tests.

X <- T + eX   # observed score on test X
Y <- T + eY   # observed score on test Y

ObsData <- data.frame(X, Y)
AllData <- data.frame(T, eX, eY, X, Y)
describe(ObsData)

The means of X and Y should both be close to zero (the mean of T). Because each observed score adds independent error variance, var(X) = var(T) + var(eX) = 9 + 1 = 10, so sd(X) should be near sqrt(10) ≈ 3.16, slightly larger than sd(T) = 3.
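
You can confirm the decomposition numerically (a quick sketch using the vectors created above):

var(X)             # should be near var(T) + var(eX) = 9 + 1 = 10
var(T) + var(eX)   # should closely match var(X)
cov(X, Y)          # should be near var(T) = 9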

Step 3 — Examine the Distributions

Plot histograms of each test score. Each observed score is the sum of two independent normal variables, so it is itself normally distributed; the histograms should look approximately bell-shaped, with some sampling noise in the bars.

library(ggplot2)   # plotting

ggplot(AllData, aes(x = X)) +
  geom_histogram(bins = 30, fill = "#3a7fc1", color = "white") +
  theme_minimal() +
  labs(title = "Distribution of X", x = "X Score", y = "Frequency")

ggplot(AllData, aes(x = Y)) +
  geom_histogram(bins = 30, fill = "#3a7fc1", color = "white") +
  theme_minimal() +
  labs(title = "Distribution of Y", x = "Y Score", y = "Frequency")
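
To see how closely a histogram tracks the distribution it should follow, one option is to overlay the implied normal curve (a sketch; it plots density rather than counts, and after_stat()/linewidth assume a recent version of ggplot2):

ggplot(AllData, aes(x = X)) +
  geom_histogram(aes(y = after_stat(density)), bins = 30,
                 fill = "#3a7fc1", color = "white") +
  stat_function(fun = dnorm, args = list(mean = 0, sd = sqrt(10)),
                color = "darkred", linewidth = 1) +
  theme_minimal() +
  labs(title = "X vs. Implied Normal", x = "X Score", y = "Density")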

Step 4 — Bivariate Distribution

Plot X against Y to see their joint distribution. Because the two tests share the same true score, you expect a positive correlation. The cloud of points will be elongated diagonally; the width of the cloud reflects the amount of measurement error relative to the true score variance.

ggplot(ObsData, aes(x = X, y = Y)) +
  geom_point(alpha = 0.4, color = "#3a7fc1") +
  theme_minimal() +
  labs(title = "Bivariate Distribution of X and Y")
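
From the identities in the overview, the expected correlation is cor(X, Y) = var(T) / sqrt(var(X) · var(Y)) = 9 / 10 = 0.9. You can verify it directly (using the ObsData frame from Step 2):

cor(ObsData$X, ObsData$Y)   # should be near 0.9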

Step 5 — Regression of Y on X

Fit a simple linear regression of Y on X and examine the slope coefficient. If the two tests measured true ability perfectly (no error), the slope would be exactly 1.0 because X and Y would be identical. With measurement error the slope will be less than 1.0 — a phenomenon called attenuation. The more error in X, the flatter the regression slope.

regressionXY <- lm(Y ~ X, data = ObsData)
summary(regressionXY)

ggplot(ObsData, aes(x = X, y = Y)) +
  geom_point(alpha = 0.3, color = "#3a7fc1") +
  geom_smooth(method = "lm", color = "darkred", se = TRUE) +
  theme_minimal() +
  labs(title = "Y Regressed on X")

cat("Slope (estimated reliability):", coef(regressionXY)["X"], "\n")
cat("True reliability var(T)/var(X):", var(T) / var(X), "\n")

The slope of this regression estimates the reliability of the tests (rXX'): with parallel tests the slope equals cov(X, Y) / var(X), which reduces to var(T) / var(X). Compare the printed slope to that ratio; they should be close. Note that the R-squared of the regression is the squared correlation, so it estimates the reliability squared (about 0.81 here), not the reliability itself.
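
Two quick checks of the same quantity (a minimal sketch using the objects created above):

coef(regressionXY)["X"]     # fitted slope of Y on X, near 0.9
cor(ObsData$X, ObsData$Y)   # correlation between the parallel tests, also near 0.9

The correlation between two parallel tests is the classical estimator of reliability; here it coincides with the slope because X and Y have approximately equal variances.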

Step 6 — Residuals Plot

Plot the residuals against the fitted values. If the regression model is appropriate, you expect a random horizontal scatter with no obvious patterns.

ResidData <- data.frame(
  fitted    = fitted(regressionXY),
  residuals = residuals(regressionXY)
)

ggplot(ResidData, aes(x = fitted, y = residuals)) +
  geom_point(alpha = 0.4) +
  geom_hline(yintercept = 0, color = "red") +
  theme_minimal() +
  labs(title = "Residuals vs. Fitted Values", x = "Fitted", y = "Residuals")
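
Because every component of the simulation is normal, the residuals should be approximately normal as well. A quick check with base R graphics (a minimal sketch):

qqnorm(ResidData$residuals)
qqline(ResidData$residuals, col = "red")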

Reflections & Variations

  1. Vary reliability. Change the standard deviations to explore different reliability levels. Try sd(T) = 1, sd(eX) = 3 for very low reliability and sd(T) = 5, sd(eX) = 1 for very high reliability. How does reliability affect the correlation between X and Y and the regression slope?
  2. Non-zero mean. Change mean = 0 for T to mean = 50 to simulate a more realistic test metric. Does the mean of T affect the correlation or regression slope?
  3. More tests. Generate a third test Z <- T + eZ with its own independent error. Compute the correlation matrix among X, Y, and Z (see the sketch after this list). What does the pattern of correlations tell you about the common latent true score?
  4. Attenuation. Compare the slope from lm(Y ~ T) to the slope from lm(Y ~ X). The first predictor is error-free, so its slope should be close to 1.0; the second contains measurement error, so its slope should be close to var(T)/var(X). Measurement error in a predictor attenuates the coefficient toward zero.
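
A minimal sketch for variation 3 (eZ and Z follow the naming pattern of the exercise):

eZ <- rnorm(500, mean = 0, sd = 1)   # independent error for a third test
Z  <- T + eZ                         # third parallel test of the same ability
cor(data.frame(X, Y, Z))             # every off-diagonal entry should be near 0.9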