After completing the exercise, you are encouraged to play with variations of your own and with some that will be suggested. You can ask for a short or long description of any command by typing HINT or HELP followed by the command name. When you encounter a new command, you are encouraged to do this. You might also take a moment to see whether the command is listed in the MINITAB Handbook.
Now, get into the MINITAB program. If you don't know how to do this, you will have to look it up in the MINITAB manual that came with the program. You should see the MINITAB prompt (which looks like this: MTB>). Now you are ready to enter the following commands:
MTB> Random 10 C1;
SUBC> Normal 0 1.

You begin by having the computer create or generate 10 random numbers. We want these numbers to be normally distributed (i.e., to come from a "bell-shaped" distribution) with a mean or average of zero and a standard deviation of 1. (Remember that the standard deviation is a measure of the "spread" of scores around the mean.) You told the computer to put these ten numbers in variable C1. To get an idea of what the RANDOM command does, type:

MTB> Help Random

Before getting to more serious matters, you should play a little with the ten observations you created. First, print them out to your screen.....
MTB> Print C1

Or get means and standard deviations....
MTB> Describe C1

The mean should be near zero and the standard deviation near one. Or, draw a histogram or bar graph....
MTB> Histogram C1

Does it look like a bell-shaped curve? Probably not, because you are only dealing with 10 observations. Why don't you start over, this time generating 50 numbers instead of 10....
MTB> Random 50 C1;
SUBC> Normal 0 1.

Notice that you have erased or overwritten the original 10 observations in C1. If you do....

MTB> Print C1

all you see are the newest 50 numbers. To describe the data...

MTB> Describe C1

Notice that the mean and standard deviation are probably closer to 0 and 1 than was the case with 10 observations. Why?

MTB> Histogram C1

This should look a little more like a normal curve than the first time (although it may still look pretty bizarre).
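If you like, you can push this idea further by generating an even larger sample with the same commands - say, 1000 observations. The mean and standard deviation should come out closer still to 0 and 1, and the histogram should look more smoothly bell-shaped. (This overwrites C1 again, which is fine; you will overwrite it once more in the next step anyway.)

MTB> Random 1000 C1;
SUBC> Normal 0 1.
MTB> Describe C1
MTB> Histogram C1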
The measurement model we'll use assumes that a test score is made up of two parts - true ability and random error. We can depict the model as:

O = T + eO

Here, O is the observed score on a test, T is true ability on that test, and eO is random error.
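To make the model concrete with some made-up numbers: if a child's true ability on a test were T = 1.2 and the random error on a particular testing occasion happened to be eO = -0.3, the observed score would be O = 1.2 + (-0.3) = 0.9. On another occasion, with an error of +0.5, the same child would score 1.2 + 0.5 = 1.7, even though his/her true ability had not changed.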
Notice what we're doing here. We will create 500 test scores or Os for two separate tests. In real life this is all we would be given and we would assume that each test score is a reflection of some true ability and random error. We would not (in real life) see the two components on the right side of the equation - we only see the observed score. We'll call our first test the X achievement test, or just plain X. It has the model....
X = T + eX

which just says that the X test score is assumed to have both true ability and error in measurement. Similarly, we'll call the second test the Y achievement test, or Y, and assume the model....
Y = T + eY

Notice that both of our tests are measuring the same construct, for example, achievement. For any given child, we assume this true ability is the same on both tests (i.e., T). Further, we assume that a child gets different scores on X and Y entirely because of the random error on either test - if the tests both measured achievement perfectly (i.e., without error), both would yield the same score for every child. OK, now try the following....
MTB> Random 500 C1;
SUBC> Normal 0 3.
MTB> Random 500 C2;
SUBC> Normal 0 1.
MTB> Random 500 C3;
SUBC> Normal 0 1.

Be sure to enter these exactly as shown. The first command created 500 numbers which we'll call the true scores, or T, for the 500 imaginary students. The second command generated the 500 random errors for the X test, while the final command generated the 500 errors for the Y test. All three (C1-C3) will have a mean near zero, and the true score will have a bigger standard deviation (3) than the two random errors (1 each). How do we know that this will be the case? We set it up this way because we wanted to create X and Y tests that are fairly accurate - tests that reflect more true ability than error. Now, name the three variables so you can keep track of them.

MTB> Name C1 "true" C2 "x-error" C3 "y-error"

Now get descriptive statistics for these three variables....
MTB> Describe C1-C3

Note that the means and standard deviations should be close to what you specified. Now construct the X test...
MTB> Add C1 C2 C4.

Remember, C1 is the true score and C2 is random error on the X test. You are actually creating 500 new scores by adding together a true score, C1, and random error, C2. Now, construct the Y test...
MTB> Add C1 C3 C5.
Notice that you use the same true ability, C1 (both tests are assumed to measure the same thing), but a different random error.
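As a quick optional check on the construction, recall that the variances of independent components add. The true score was generated with a standard deviation of 3 (variance 9) and each error with a standard deviation of 1 (variance 1), so each composite test should have a variance near 9 + 1 = 10 and a standard deviation near the square root of 10, or about 3.16. You can let MINITAB do the arithmetic (assuming the SQRT function is available inside LET, as it is in most versions) and compare the result with the standard deviations that DESCRIBE reports below....

MTB> Let K1 = Sqrt(9 + 1)
MTB> Print K1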
It would be worth stopping at this point to think about what you have done. You have been creating imaginary test scores. You have constructed two tests which you labeled X and Y. Both of these imaginary tests measure the same trait because both of them share the same true score. This true score (C1) reflects the true ability of each child on an imaginary achievement test, for example. In addition, each test has its own random error (C2 for X and C3 for Y). This random error reflects all the situational factors (e.g., bad lighting, not enough sleep the night before, noise in the testing room, lucky guesses, etc.) that can cause a child to score better or worse on the test than his/her true ability alone would yield. One more word about the scores. Because the true score and error variables were constructed to all have zero means, it should be obvious that the X and Y tests will also have means near zero. This might seem like an unusual kind of test score, but it was done for technical reasons. If you feel more comfortable doing so, you may think of these scores as achievement test scores where a positive value indicates a child who scores above average for his/her age or grade, and a negative score indicates a child who scores below average.
If this were real life, of course, you would not be constructing test scores like this. Instead, you would measure the two sets of scores, X and Y, and would do an analysis of them. You would assume that the two measures have a common true score and independent errors, but you would not see these. Thus, you have generated what we call simulated data. The advantage of using such data is that, unlike with real data, you know how the X and Y tests are constructed because you constructed them. You will see in later simulations that this enables you to test different analysis approaches to see if they give back the results that you put into the data. If the analyses work on simulated data then you might assume that they will also work for real data if the real data meet the assumptions of the measurement model used in the simulations.
Now, pretend that you didn't create the X and Y tests but, rather, that you were given these two sets of test scores and asked to do a simple analysis of them. You might begin by exploring the data to see what it looks like. First, name the two tests....
MTB> Name C4 "X" C5 "Y"

Try this command...
MTB> Info

This just tells you what variables you have, what their names are (if any), and how many observations each contains. Now, describe the data....
MTB> Describe C4-C5

By the way, you might also try some of the other column operations listed in MINITAB Help. For example....
MTB> Count C4
tells you there are 500 observations in C4,
MTB> Sum C4
gives the sum,
MTB> Average C4
gives the mean (which should be near zero),
MTB> Medi C4
gives the median,
MTB> Standard C4
gives the standard deviation, and
MTB> Maxi C4
MTB> Mini C4
give the highest and lowest values in C4.
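You can also have the LET command - which you will meet again near the end of this exercise - compute these same summaries and store them in constants for later use. The commands below are only a sketch; they assume your version of MINITAB accepts the MEAN and STDEV functions inside LET....

MTB> Let K1 = Mean(C4)
MTB> Let K2 = Stdev(C4)
MTB> Print K1 K2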
Now look at the distributions....
MTB> Histogram C4
MTB> Histogram C5

These should look a lot more like bell-shaped or normal curves than the earlier graphs did. Look at the bivariate relationship between X and Y....

MTB> Plot C5 * C4;
SUBC> Symbol.

Notice a few things. You plotted C5 on the vertical axis and C4 on the horizontal. Each point on the graph indicates an X score paired with a Y score. It should be clear that the X and Y tests are positively correlated, that is, higher scores on one test tend to be associated with higher scores on the other. To confirm this, do...
MTB> Correlation C4 C5

The correlation should be near .90.
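Why .90 rather than some other value? It follows from the way you built the tests: X and Y share the true score, so their covariance equals the true-score variance, 9, while each test's total variance is 9 + 1 = 10. The expected correlation is therefore 9/10 = .90, and your sample value should fall somewhere near it. If you like, you can check the arithmetic in the session window....

MTB> Let K1 = 9/(9 + 1)
MTB> Print K1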
You can predict scores on one test using scores on the other. To do this you will use regression analysis. Fit the straight-line regression of Y on X...

MTB> Regress;
SUBC> Response 'Y';
SUBC> Continuous 'X';
SUBC> Terms 'X'.

For now, don't worry about what all the output means (although you might want to start looking at the section on Regression in MINITAB Help). The regression equation describes the best-fitting straight line for the regression of Y on X. You could draw this line on the graph you did earlier: just substitute some values in for X (try X = 0, 1, -1, 2, and -2), calculate Y using the equation the regression analysis gives you, and plot the resulting X, Y pairs - you will see that they fall on a straight line. Recall from your high school algebra days that the number immediately to the right of the equal sign is the intercept and tells you where the line hits the Y axis (i.e., when X = 0). The number next to the X variable name is the slope.
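To make this concrete, suppose (purely hypothetically - your own numbers will differ) that the printed equation were Y = 0.02 + 0.89 X. Then the fitted value at X = 2 would be 0.02 + 0.89(2) = 1.80, and at X = -1 it would be 0.02 + 0.89(-1) = -0.87. You could even let MINITAB do the arithmetic and the plotting for you; the commands below are only a sketch, using those made-up coefficients (substitute the ones from your own output) and the otherwise unused columns C10 and C11....

MTB> Set C10
DATA> -2 -1 0 1 2
DATA> End
MTB> Let C11 = 0.02 + 0.89*C10
MTB> Plot C11 * C10;
SUBC> Symbol.

The steps that follow show a more general way to plot the fitted line, using values stored by the REGRESS command itself.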
It is possible to look at plots of the residuals and of the regression line itself if we add storage subcommands to the REGRESS statement....

MTB> Regress;
SUBC> Response 'Y';
SUBC> Continuous 'X';
SUBC> Terms 'X';
SUBC> Residuals C20;
SUBC> Coefficients C22.
MTB> Let C21=C5-C20

We have arbitrarily chosen columns C20-C22 to store the residuals, predicted values and coefficients, respectively. The predicted Y value is simply the observed Y minus the residual; the LET command is used to construct it in C21. (Try 'Help Regress' to get information about the command.) Now, to plot the regression line, you plot the predicted values against the X variable....
MTB> Plot C21 * C4;
SUBC> Symbol.

This is actually a plot of the straight line that you fit with the regression analysis. It doesn't look like a "perfect" straight line because it is done on a line printer and there is rounding error, but it should give you some idea of the type of line that you fit. Now, you can also look at the residuals (i.e., the Y-distance from the fitted regression line to each of the data points). To do this, type....

MTB> Plot C20 * C4;
SUBC> Symbol.

Notice that the bivariate distribution is circular in shape, indicating that the residuals are uncorrelated with the X variable (remember the assumption in regression that these must be uncorrelated?). This graph shows that the regression line fits the data well - there appear to be about as many residuals that are positive (i.e., above the regression line) as negative. You might also want to examine the assumption that the residuals are normally distributed. Can you figure out a way to do this?
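If you get stuck on that last question, here is one possibility (a sketch, not the only approach): draw a histogram of the stored residuals in C20 and see whether it looks roughly bell-shaped. If your version of MINITAB supports the NSCORES command, you can also compute normal scores for the residuals (we arbitrarily put them in the otherwise unused column C23) and plot the residuals against them; a roughly straight-line pattern suggests the residuals are close to normally distributed.

MTB> Histogram C20
MTB> NScores C20 C23
MTB> Plot C20 * C23;
SUBC> Symbol.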
Now, you should again stop to consider what you have done. In the first part of the exercise you generated two imaginary tests, X and Y. In the second part you did some analyses of these tests. The analyses told you that the means of the tests were near zero, which is no surprise because that's the way you set things up. Similarly, the bivariate graph and the correlation showed you that the two tests were positively related to each other. Again, you set them up to be correlated by including the same true ability score in both tests. Thus, in this first simulation exercise, you have confirmed through simulation that these statistical procedures do tell you something about what is in the data.
It would probably be worth your time to play around with variations on this exercise. This would help familiarize you with MINITAB and with basic simulation ideas. For example, try some of the following....