3 Variance and Error

Variability is an essential characteristic of the natural world, and we see it everywhere around us. An extreme case is human fingerprints, unique to each individual, which makes them useful for identification. Similarly, though perhaps less extremely and depending on what is being studied, research participants differ from each other, and these differences introduce variability into the study. For this reason, laboratory experiments attempt to control as many of the factors that produce variability as possible; at the opposite end are studies that embrace variability and collect data in the participants' natural environment. Because of this variability, the values obtained from measuring a construct differ from research participant to research participant. In classical statistical inference, the variance is a measure of how spread out these readings are from the average of the sample.

The variance is directly related to the standard deviation (SD), which indicates how much variation or dispersion there is in the values of a sample; in mathematical terms, the SD is the square root of the variance. A large SD indicates that the values are spread out from the mean over a wide range, while a small SD indicates that the values cluster closely around the mean.
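
As a concrete illustration, here is a minimal Python sketch, using made-up reaction-time values, of the relationship between the mean, the variance, and the SD:

```python
import statistics

# A small, hypothetical sample of reaction times (seconds) from five participants.
sample = [1.8, 2.1, 2.4, 1.9, 2.6]

mean = statistics.mean(sample)          # sample average
variance = statistics.variance(sample)  # sample variance, divides by n - 1
sd = statistics.stdev(sample)           # standard deviation = sqrt(variance)

print(f"mean = {mean:.3f}")
print(f"variance = {variance:.3f}")
print(f"SD = {sd:.3f}  (square root of the variance)")
```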

Total variance can be thought of as the sum of two variances: systematic (between-groups) variance and error (within-groups) variance. Do not confuse between-groups/within-groups variance with between-subjects/within-subjects research designs; see Between-Subjects vs. Within-Subjects Designs for more information. The ratio of the two variances can serve as an indication of whether the differences between groups are systematic or due to chance.

Systematic (between-groups) variance is the result of the intervention and of any additional confounding variables present in the study. The intent of an experiment is to generate variability in the dependent variable (DV) by manipulating the independent variable (IV). This is the type of variance the researcher is looking for.

Error (within-groups) or non-systematic variance is the unexplained variability in the DV. It is determined by the random variability between subjects, and it is usually a nuisance that can be lived with. The decomposition of total variance into these two components is illustrated in the sketch below.
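
Here is a minimal Python sketch of this decomposition, assuming two made-up groups of five scores each. It checks that the between-groups and within-groups sums of squares add up to the total, and forms their degrees-of-freedom-adjusted ratio, the F statistic, which is the "ratio of the two variances" mentioned above:

```python
import statistics

# Hypothetical scores for a control and a treatment group (made-up numbers).
groups = {
    "control":   [4.0, 5.0, 4.5, 5.5, 5.0],
    "treatment": [6.0, 7.0, 6.5, 7.5, 7.0],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = statistics.mean(all_scores)

# Between-groups (systematic) sum of squares: group means vs. the grand mean.
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups.values())

# Within-groups (error) sum of squares: each score vs. its own group mean.
ss_within = sum((x - statistics.mean(g)) ** 2
                for g in groups.values() for x in g)

# The two components add up to the total sum of squares.
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
print(f"total {ss_total:.2f} = between {ss_between:.2f} + within {ss_within:.2f}")

# Their ratio, adjusted for degrees of freedom, is the F statistic:
# a large F suggests systematic rather than chance differences.
k, n = len(groups), len(all_scores)
f_ratio = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_ratio:.2f}")
```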

Considering that reality is usually described by more than two variables, other variables can affect systematic variance as well. Of these, the variables that influence both the IV and the DV are called confounding variables. Confounding happens when the design of the experiment (its controls) makes it difficult or impossible to eliminate alternative explanations for an observed cause-effect relationship. For example, suppose an experiment includes the same number of men and women, divided into a treatment group and a control group; the treatment itself is not relevant here, only how participants are assigned to it. If only men are assigned to the control group and only women to the treatment group, then when it comes time to interpret the results there is no way to know whether the observed effects are accurate and due only to the treatment. The participants' gender is confounded with the treatment, preventing the researcher from determining whether the effects are due to the treatment alone or whether gender also has something to do with them. In many situations confounding variables are variables the experiment did not account for. They can cause two major issues: increased variance and bias.

The variability generated by confounding variables is impossible to separate from the variability due to the intervention, which makes the interpretation of the results difficult or impossible. Therefore, any experimental design should attempt to eliminate confounding variables and to keep the error variance as small as possible.

The most effective way to control confounding variables is random assignment of participants to the experimental groups, which forces all variables other than those studied to create only random (non-systematic) variance. Random assignment has the effect of transferring the variance due to confounding into error variance; the rationale is that, in most cases, it is better to deal with error variance than with bias when interpreting the results.
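
A minimal sketch of random assignment, using a hypothetical participant pool in which gender is a potential confounder:

```python
import random

random.seed(0)

# Hypothetical participant pool; gender is a potential confounding variable.
participants = [(f"p{i:02d}", "woman" if i % 2 else "man") for i in range(20)]

random.shuffle(participants)   # random order, unrelated to any attribute
control = participants[:10]
treatment = participants[10:]

# On average each group now contains a similar mix of men and women, so gender
# can only contribute random (error) variance, not systematic variance.
women_in_treatment = sum(1 for _, gender in treatment if gender == "woman")
print(f"{women_in_treatment} of 10 treatment participants are women")
```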

On the other hand, the smaller the error variance, the more powerful the design. Therefore, in addition to eliminating confounding variables, it is recommended to reduce the error variance as much as possible. Here are two accepted ways of doing so:

  • If possible, hold some of the variables constant instead of randomizing all variables in the study.
  • Increase the size of the sample, as error variance is inversely proportional to the sample's number of degrees of freedom. (Degrees of freedom estimate the number of independent pieces of information that go into computing an estimate, that is, the number of values free to vary in a data set; for one sample it is calculated as n - 1, and for two samples, with an n for each, it is df = n1 + n2 - 2.) The sketch after this list illustrates the effect.
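
Here is a minimal sketch of the second point, assuming a hypothetical population with mean 100 and SD 15: as the sample size (and with it the degrees of freedom) grows, the standard error of the mean shrinks.

```python
import math
import random
import statistics

random.seed(1)

# Draw samples of increasing size from a hypothetical population
# (mean 100, SD 15) and watch the standard error of the mean shrink.
for n in (10, 40, 160, 640):
    sample = [random.gauss(100, 15) for _ in range(n)]
    se = statistics.stdev(sample) / math.sqrt(n)
    print(f"n = {n:3d}, df = {n - 1:3d}, standard error of the mean = {se:.2f}")
```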

The concept of variance is closely related to that of error. The following are the most significant sources of error in a quantitative research design:

  • Random error - occurs by chance and can be produced by anything that randomly interferes with measurement;

  • Systematic error - is generated by consistent differences between the measured value and the true value, for example measuring with an instrument that has a calibration issue or with a watch that consistently runs two minutes ahead;

  • Measurement error - relating to the validity and reliability of an instrument:

    • Validity - the instrument is capable of accurately measuring the construct it was designed to measure. Validity takes several forms: face validity (the test measures what it is supposed to measure), predictive or empirical validity (the ability to predict a relevant behavior), and construct validity (correlations with other measures agree with the construct being measured);

    • Reliability - or reproducibility, the capacity of the instrument to perform consistently over time and across observers; when the same instrument is used to measure something twice, the results should be approximately the same.

  • Sampling error - different samples generate different results, a fact that needs to be accounted for when making inferences from a sample to the population. Sampling error is measured by the standard error and may result in:

    • Type I Error - occurs when the null hypothesis is rejected when it is true. The probability of this error is called the significance level and is denoted by the Greek letter alpha (\(\alpha\));

    • Type II Error - occurs when a false null hypothesis is accepted (the test fails to reject it). The probability of this error is denoted by the Greek letter beta (\(\beta\)); the probability of avoiding it, \(1 - \beta\), is called power.

Sampling error cannot be completely eliminated, but it can be reduced by increasing the sample size.
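
To make the two error types concrete, here is a minimal simulation sketch; the population parameters (mean 100, SD 15), the sample size, and the alternative mean of 106 are all made-up values. Under a true null hypothesis, a two-sided z-test at \(\alpha = 0.05\) should still reject about 5% of the time (Type I errors); under a false null, the rejection rate estimates the power \(1 - \beta\).

```python
import math
import random
import statistics

random.seed(2)

MU0, SIGMA, N = 100, 15, 25   # null-hypothesis mean, population SD, sample size
Z_CUTOFF = 1.96               # two-sided z cutoff for alpha = 0.05
TRIALS = 10_000

def rejection_rate(true_mean):
    """Fraction of samples for which a two-sided z-test rejects H0: mu = MU0."""
    rejections = 0
    for _ in range(TRIALS):
        sample = [random.gauss(true_mean, SIGMA) for _ in range(N)]
        z = (statistics.mean(sample) - MU0) / (SIGMA / math.sqrt(N))
        if abs(z) > Z_CUTOFF:
            rejections += 1
    return rejections / TRIALS

# H0 true: rejections are Type I errors; their rate should be close to alpha.
print(f"Type I error rate ~ {rejection_rate(100):.3f}")   # about 0.05

# H0 false (true mean is 106): non-rejections are Type II errors;
# the rejection rate estimates the power, 1 - beta.
power = rejection_rate(106)
print(f"power ~ {power:.3f}, beta ~ {1 - power:.3f}")
```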