Part II — Computer Simulations · R

The Nonequivalent Group Design — Part II

Download the complete R script for this exercise to run it in RStudio.


Overview

Part I demonstrated that ANCOVA correctly recovers the treatment effect when the pretest covariate is measured without error. In practice, pretests always contain some measurement error; no measure is perfectly reliable. Measurement error in the covariate attenuates the regression adjustment, so ANCOVA under-corrects for selection bias and produces a biased estimate of the treatment effect. This is the classic errors-in-variables (attenuation) problem, and it is closely related to Lord's paradox.
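The attenuation mechanism can be seen in isolation before we bring in groups at all. The following minimal sketch (not part of the exercise script; the sample size and seed are arbitrary choices) regresses an outcome on an error-laden predictor and shows that the slope shrinks from its true value of 1 toward the reliability ratio:

```r
# Sketch: measurement error in a predictor attenuates its slope
# by the reliability ratio var(true) / var(observed).
set.seed(1)
true_score <- rnorm(10000, mean = 50, sd = 5)
error      <- rnorm(10000, mean = 0,  sd = 5)
observed   <- true_score + error                 # fallible measure
y <- true_score + rnorm(10000, mean = 0, sd = 1) # outcome depends on the true score

reliability <- var(true_score) / var(observed)   # about 25/50 = 0.5
slope <- unname(coef(lm(y ~ observed))["observed"])  # about 1 * reliability
```

With these parameters the fitted slope comes out near 0.5 rather than 1, which is exactly the under-adjustment that biases ANCOVA below.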

This exercise shows the bias empirically and then demonstrates the reliability-corrected ANCOVA approach (Porter, 1967; Lord, 1960), which adjusts the regression coefficient for measurement error to recover an unbiased estimate.

Step 1 — Generate Data with Measurement Error in the Covariate

We use the same NEGD setup as Part I, but now the pretest contains measurement error. We also generate a second parallel pretest measure that we will use to estimate reliability.

library(psych)
library(ggplot2)

set.seed(12345)  # for reproducible results (any seed will do)

T   <- rnorm(500, mean = 50, sd = 5)  # true score (note: T also abbreviates TRUE in R)
eX  <- rnorm(500, mean = 0, sd = 5)   # pretest error
eX2 <- rnorm(500, mean = 0, sd = 5)   # parallel pretest error (for reliability)
eY  <- rnorm(500, mean = 0, sd = 5)   # posttest error

RandomAssign <- rnorm(500, mean = 0, sd = 5)
Z <- ifelse(RandomAssign > median(RandomAssign), 1, 0)

# NEGD with a 5-point selection advantage for the treatment group
X1 <- T + eX + (5 * Z)                # pretest (measured with error)
X2 <- T + eX2 + (5 * Z)               # parallel pretest (same true score, independent error)
Ynegd <- T + eY + (5 * Z) + (10 * Z)  # posttest: selection (5) + treatment (10)

AllData <- data.frame(T, X1, X2, Ynegd, Z)
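Before fitting any models, it is worth confirming that the data look the way the generating equations say they should: the groups should differ by about 5 points on the pretest (selection only) and about 15 on the posttest (selection plus treatment). A self-contained sketch (the seed is an arbitrary choice, and the generation code repeats Step 1 so the snippet runs on its own):

```r
# Sanity check on the Step 1 setup: unadjusted group differences
# should be ~5 on the pretest and ~15 on the posttest.
set.seed(20240101)
T  <- rnorm(500, mean = 50, sd = 5)
eX <- rnorm(500, mean = 0, sd = 5)
eY <- rnorm(500, mean = 0, sd = 5)
RandomAssign <- rnorm(500, mean = 0, sd = 5)
Z  <- ifelse(RandomAssign > median(RandomAssign), 1, 0)
X1 <- T + eX + 5 * Z
Ynegd <- T + eY + 5 * Z + 10 * Z

pre_diff  <- mean(X1[Z == 1]) - mean(X1[Z == 0])        # about 5
post_diff <- mean(Ynegd[Z == 1]) - mean(Ynegd[Z == 0])  # about 15
```

If either difference is far from its target, something went wrong in the data step before any ANCOVA is run.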

Step 2 — Standard ANCOVA (Biased)

Run ANCOVA using the fallible pretest X1 as the covariate. Because X1 contains measurement error, the covariate does not fully adjust for the true pretest difference between groups. The estimate of the treatment effect will be biased upward — ANCOVA will attribute some of the selection advantage to the treatment.

ModelANCOVA <- lm(Ynegd ~ Z + X1, data = AllData)
summary(ModelANCOVA)

cat("Standard ANCOVA estimate of treatment effect:", coef(ModelANCOVA)["Z"], "\n")
cat("True treatment effect: 10\n")

The Z coefficient should be noticeably larger than 10 — ANCOVA overestimates the treatment effect because measurement error prevents full adjustment for selection bias.
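With these particular parameters the expected size of the bias can be worked out by hand: controlling for Z, the pretest slope is attenuated from 1 to the within-group reliability ρ = var(T) / (var(T) + var(eX)) = 25/50 = 0.5, so ANCOVA removes only ρ × 5 = 2.5 of the 5-point selection advantage. A two-line check of that arithmetic (assuming the Step 1 parameters):

```r
# Expected value of the biased ANCOVA estimate under the Step 1 setup:
# sd(T) = sd(eX) = 5, selection advantage = 5, true effect = 10.
rho <- 25 / (25 + 25)                    # within-group pretest reliability = 0.5
expected_estimate <- 10 + (1 - rho) * 5  # 10 + 2.5 = 12.5
```

So the Z coefficient from Step 2 should land near 12.5, a 25% overestimate of the true effect.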

Step 3 — Estimate Reliability

Estimate the reliability of the pretest from the correlation between the two parallel measures X1 and X2. This correlation estimates the reliability coefficient rXX'. One caveat: computed across both groups together, the correlation counts the between-group (selection) variance as true-score variance, so it slightly overstates the within-group reliability that the slope correction actually needs.

r_XX <- cor(AllData$X1, AllData$X2)
cat("Estimated reliability of pretest:", round(r_XX, 3), "\n")

# Population within-group reliability: var(T) / (var(T) + var(eX)) = 25/50 = 0.5
cat("Within-group reliability var(T)/(var(T)+var(eX)):",
    round(var(T) / (var(T) + var(eX)), 3), "\n")
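Because X1 and X2 both carry the 5-point group difference, correlating them across the pooled sample inflates the estimate relative to the within-group reliability of 0.5. A sketch of the comparison (self-contained; the seed is arbitrary, and the sample is enlarged to 5,000 cases, an assumption of mine, purely to make the gap stable enough to see in one run):

```r
# Compare the pooled-sample reliability estimate with one computed
# within each group and averaged. Setup mirrors Step 1, larger n.
set.seed(2023)
n <- 5000
T   <- rnorm(n, 50, 5)
eX  <- rnorm(n, 0, 5)
eX2 <- rnorm(n, 0, 5)
RandomAssign <- rnorm(n, 0, 5)
Z  <- ifelse(RandomAssign > median(RandomAssign), 1, 0)
X1 <- T + eX + 5 * Z
X2 <- T + eX2 + 5 * Z

r0 <- cor(X1[Z == 0], X2[Z == 0])
r1 <- cor(X1[Z == 1], X2[Z == 1])
r_within <- mean(c(r0, r1))   # groups are equal-sized, so a plain mean
r_total  <- cor(X1, X2)       # larger: the group difference counts as "true score"
```

The within-group value sits near the population reliability of 0.5, while the pooled value runs higher (about 0.56 in expectation here), which is why the correction in Step 4 typically stops a little short of the true effect.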

Step 4 — Reliability-Corrected ANCOVA

The reliability-corrected approach (the Lord-Porter correction) disattenuates the pretest slope by dividing it by the reliability estimate. First obtain the raw ANCOVA slope for X1, then divide it by rXX' to correct it upward. The corrected treatment effect is then recovered by subtracting the extra adjustment implied by the corrected slope, namely (corrected slope minus raw slope) times the group difference in pretest means, from the raw estimate.

# Reliability-corrected ANCOVA
b_X1 <- coef(ModelANCOVA)["X1"]  # raw slope for pretest covariate
b_Z  <- coef(ModelANCOVA)["Z"]   # raw treatment estimate
b_X1_corrected <- b_X1 / r_XX    # disattenuated slope

# Mean pretest difference between groups
mean_X1_diff <- mean(AllData$X1[AllData$Z == 1]) - mean(AllData$X1[AllData$Z == 0])

# Corrected treatment effect: remove the extra adjustment implied by the corrected slope
b_Z_corrected <- b_Z - (b_X1_corrected - b_X1) * mean_X1_diff

cat("Raw ANCOVA treatment estimate: ", round(b_Z, 3), "\n")
cat("Reliability-corrected estimate: ", round(b_Z_corrected, 3), "\n")
cat("True treatment effect: 10\n")

The reliability-corrected estimate should be closer to the true value of 10. The correction will not be perfect — it depends on how accurately we estimated reliability — but it substantially reduces the bias introduced by measurement error.
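A single run can land close to or far from 10 by luck, so a more convincing check averages both estimators over many simulated data sets. The following Monte Carlo sketch (the `one_run` helper, seed, and replication count are my own choices; the generating equations repeat Step 1) shows the raw estimate hovering near 12.5 and the corrected one much closer to 10:

```r
# Monte Carlo check: average the standard and corrected ANCOVA
# estimates over repeated simulations of the Step 1 setup.
set.seed(99)
one_run <- function() {
  T   <- rnorm(500, 50, 5)
  eX  <- rnorm(500, 0, 5)
  eX2 <- rnorm(500, 0, 5)
  eY  <- rnorm(500, 0, 5)
  RandomAssign <- rnorm(500, 0, 5)
  Z  <- ifelse(RandomAssign > median(RandomAssign), 1, 0)
  X1 <- T + eX + 5 * Z
  X2 <- T + eX2 + 5 * Z
  Y  <- T + eY + 15 * Z
  fit   <- lm(Y ~ Z + X1)
  b_Z   <- coef(fit)[["Z"]]
  b_X1  <- coef(fit)[["X1"]]
  r_XX  <- cor(X1, X2)
  diffX <- mean(X1[Z == 1]) - mean(X1[Z == 0])
  c(raw = b_Z, corrected = b_Z - (b_X1 / r_XX - b_X1) * diffX)
}
res <- replicate(200, one_run())
rowMeans(res)  # raw stays biased; corrected sits much nearer the true effect of 10
```

The corrected average typically remains slightly above 10 because cor(X1, X2) overstates the within-group reliability, as noted in Step 3.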

Step 5 — Compare Estimates

estimates <- data.frame(
  Method = c("No covariate", "Standard ANCOVA", "Corrected ANCOVA", "True effect"),
  Estimate = c(
    mean(AllData$Ynegd[AllData$Z == 1]) - mean(AllData$Ynegd[AllData$Z == 0]),
    b_Z,
    b_Z_corrected,
    10
  )
)
print(estimates)
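The comparison is easier to take in as a picture. A base-R bar chart works (the ggplot2 package loaded in Step 1 would do equally well); the estimate values below are illustrative placeholders near the expected values for this setup (about 15 unadjusted, 12.5 for standard ANCOVA, 10.5 corrected), since each run's numbers differ:

```r
# Sketch: bar chart of the Step 5 comparison. Estimates shown are
# illustrative placeholders; substitute the values from your own run.
estimates <- data.frame(
  Method   = c("No covariate", "Standard ANCOVA", "Corrected ANCOVA", "True effect"),
  Estimate = c(15, 12.5, 10.5, 10)
)
barplot(estimates$Estimate, names.arg = estimates$Method,
        ylab = "Estimated treatment effect", las = 2,
        main = "Treatment effect estimates by method")
abline(h = 10, lty = 2)  # dashed line at the true effect
```

The dashed reference line makes it obvious which methods over-shoot the true effect.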

Reflections & Variations

  1. Vary reliability. Change the standard deviation of eX to alter pretest reliability. With sd(eX) = 1 (high reliability), standard ANCOVA should be nearly unbiased. With sd(eX) = 10 (low reliability), the bias should be much larger. How well does the correction work in each case?
  2. Vary selection bias. Try larger or smaller selection advantages (e.g., 15 * Z or 2 * Z). How does the magnitude of selection interact with reliability to produce bias?
  3. Perfect covariate. Set X1 <- T + 5*Z (no measurement error). Confirm that standard ANCOVA recovers the true effect without any correction.
  4. Estimate reliability differently. Instead of using a parallel test, estimate internal-consistency reliability using split-half correlation. Does a different reliability estimate change the quality of the correction?
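The first variation can be sketched as a small loop. The helper name `bias_for_sd`, the seed, and the replication count are my own choices; everything else repeats the Step 1 setup with the pretest error sd as a parameter:

```r
# Variation 1 sketch: bias of standard ANCOVA as pretest reliability falls.
# Reliability is var(T) / (var(T) + sd_eX^2), so larger sd_eX means more bias.
set.seed(123)
bias_for_sd <- function(sd_eX) {
  T  <- rnorm(500, 50, 5)
  eY <- rnorm(500, 0, 5)
  RandomAssign <- rnorm(500, 0, 5)
  Z  <- ifelse(RandomAssign > median(RandomAssign), 1, 0)
  X1 <- T + rnorm(500, 0, sd_eX) + 5 * Z
  Y  <- T + eY + 15 * Z
  coef(lm(Y ~ Z + X1))[["Z"]] - 10   # bias of the standard ANCOVA estimate
}
bias <- sapply(c(1, 5, 10), function(s) mean(replicate(50, bias_for_sd(s))))
bias  # near 0 at sd = 1, growing steadily as reliability drops
</imports>
```

Analytically the expected biases are (1 − ρ) × 5 with ρ = 25/(25 + sd²): roughly 0.2, 2.5, and 4.0 for sds of 1, 5, and 10.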
← Back to Simulation Home