A quasi-experiment differs from a true experiment in that the independent variable is not randomly assigned. The independent variable may be a pre-existing characteristic such as gender, nationality, number of siblings or degree subject, in which case the conditions cannot be assigned or manipulated by the experimenter. Alternatively, participants may not be randomly assignable for ethical reasons: if you were studying the effectiveness of an alcohol addiction treatment, for example, it might be unethical to deny treatment to an alcoholic by assigning them to a control condition (Christensen, 2000), although this regularly happens in medical research, presumably because the knowledge gained is deemed to justify the temporary withholding of treatment.

The design of the longitudinal studies in this thesis is quasi-experimental. The hypothesis is that studying mathematics improves students’ reasoning skills to a greater extent than studying other subjects. Students who had already chosen to study mathematics were compared to students who had already chosen to study other subjects. It would be unethical, and practically impossible, to randomly assign people to studying different A levels or degrees, and so a quasi-experimental design is the only way to test the hypothesis.

The main drawback of a quasi-experimental design is that extraneous variables, and in particular confounding extraneous variables, are not properly controlled for. An extraneous variable is a factor other than the independent variable that affects the dependent variable. An extraneous variable becomes confounding when it also varies systematically with the independent variable. If a confounding variable is not controlled for by random assignment to conditions, it can become an alternative explanation for any effects found, and this creates a problem for determining causation.
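As a toy illustration of this point (simulated numbers only, not data from the thesis), the sketch below constructs a situation in which studying mathematics has no effect on reasoning at all, yet a naive comparison of group means shows a difference, simply because intelligence both drives reasoning scores and differs systematically between the groups.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
# Intelligence differs systematically between the groups (the confound)...
iq_maths = rng.normal(105, 10, n)
iq_other = rng.normal(100, 10, n)
# ...and reasoning depends only on intelligence, not on the subject studied.
reasoning_maths = 0.5 * iq_maths + rng.normal(0, 5, n)
reasoning_other = 0.5 * iq_other + rng.normal(0, 5, n)

# A naive group comparison nonetheless shows the maths group ahead (~2.5 points).
print(reasoning_maths.mean() - reasoning_other.mean())
```

Without random assignment, the group difference in intelligence is not broken up, so the spurious difference in reasoning remains available as an alternative explanation.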

One way to deal with this problem in a quasi-experimental design is to measure, and statistically control for, any factors that are anticipated to be confounding. In the case of the longitudinal study, the mathematics group may have a higher mean intelligence than the non-mathematics group (Inglis & Simpson, 2009a), and intelligence may affect both reasoning ability and its development. The solution is therefore to measure participants’ intelligence and statistically control for its influence in the analysis.

In a true experiment with random assignment to conditions, causation can be established because no other variables could plausibly be creating the effects observed. In a quasi-experiment, it is only possible to establish plausible causation by showing that all alternative explanations are implausible. Christensen (2000) gave the example of a person who dies immediately after being hit by a car. It is possible that they actually died from a heart attack, independently of being hit by the car, but this is so implausible that you can reasonably conclude that the collision was the cause of death.

Beyond the problem of causation, there are several other issues associated with quasi-experimental designs. In each of the examples given below, the results of a study would seem to support the hypothesis that an intervention or treatment works. However, there are alternative explanations of the results that have not been ruled out.

In the longitudinal studies reported in Chapters 5 and 6, participants’ reasoning skills are measured both before and after the conditions are experienced. One potential outcome of this design identified by Christensen (2000) is that the comparison group does not change in reasoning skills while the mathematics group does, as demonstrated in Figure 3.1. This would imply that the hypothesis is correct, but there is a potential problem known as selection-maturation.

Selection-maturation means that one group may already be developing faster on the dependent variable than the other group, for example because they are more intelligent. Perhaps mathematics students are more intelligent than students of other subjects, and perhaps high-intelligence individuals are on a faster developmental trajectory for reasoning skills than lower-intelligence individuals.

Figure 3.1: Possible quasi-experimental design outcome 1 (dependent measure at pre-test and post-test for the experimental and control groups).

One way to deal with this problem is to match the groups on the extraneous variable that could be responsible for a selection-maturation effect. In this case, we would need to match the participants in each group on intelligence scores. This would mean taking participants from the lower range of the higher-scoring group and from the upper range of the lower-scoring group in such a way that the selected groups’ means on the intelligence measure are equal. There is a problem with this solution, though: the participants who are at the extremes of their group’s range of intelligence scores at pre-test may regress towards the mean of their group by post-test, which could lead us to underestimate the effect of the independent variable.
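To make the matching procedure concrete, the following sketch (using simulated intelligence scores rather than any data from this thesis; the group sizes, distributions and tolerance are purely illustrative) trims participants from the extremes of each group until the group means on the matching variable are approximately equal.

```python
import numpy as np

rng = np.random.default_rng(0)
maths = np.sort(rng.normal(105, 10, 80))   # hypothetical higher-scoring group
other = np.sort(rng.normal(100, 10, 80))   # hypothetical lower-scoring group

# Drop the highest scorer from the higher-scoring group and the lowest scorer
# from the lower-scoring group until the group means are approximately equal.
tolerance = 0.5
while abs(maths.mean() - other.mean()) > tolerance and min(len(maths), len(other)) > 2:
    if maths.mean() > other.mean():
        maths, other = maths[:-1], other[1:]
    else:
        maths, other = maths[1:], other[:-1]

print(f"Matched ns: {len(maths)}, {len(other)}")
print(f"Matched means: {maths.mean():.1f}, {other.mean():.1f}")
```

As the sketch makes clear, the retained participants are, by construction, the extremes of their own groups, which is precisely what makes regression towards the group means between pre-test and post-test likely, and the procedure also discards participants and so shrinks the sample.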

Another possible solution is to use statistical methods, such as analysis of covariance (ANCOVA), that take the effect of the confounding variable into account when determining the results. Van Breukelen (2006) discussed the advantages and disadvantages of using ANCOVA compared to repeated-measures analysis of variance (ANOVA) for inferring a treatment effect. The ANCOVA method is to perform an analysis on Time 2 scores with Time 1 scores as a covariate, along with the suspected confounding variables, while the repeated-measures ANOVA method involves comparing change-from-baseline in each group with only the confounding variables as covariates. Van Breukelen (2006) argued that in randomised studies both methods are unbiased but ANCOVA has more power. However, where there is no random assignment to conditions, repeated-measures ANOVA is less biased because ANCOVA assumes no baseline difference, which cannot be guaranteed in non-randomised designs. Therefore, repeated-measures ANOVA with covariates of intelligence and thinking disposition will be used in the longitudinal studies presented in this thesis. A major benefit of this approach, as opposed to matching participants, is that the sample is not reduced.
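As an illustration of the distinction drawn by Van Breukelen (2006), the sketch below shows how the two analyses might be specified using the statsmodels formula interface in Python. This is not the analysis code used in the thesis; the data are simulated and the variable names (pre, post, group, iq, disposition) are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; the real studies would use measured scores.
rng = np.random.default_rng(2)
n = 200
group = np.repeat(["maths", "other"], n // 2)
iq = rng.normal(100, 15, n)
disposition = rng.normal(0, 1, n)
pre = 0.3 * iq + rng.normal(0, 5, n)
post = pre + 2 * (group == "maths") + rng.normal(0, 5, n)
df = pd.DataFrame({"group": group, "iq": iq, "disposition": disposition,
                   "pre": pre, "post": post})
df["change"] = df["post"] - df["pre"]

# ANCOVA approach: Time 2 scores with Time 1 scores and the suspected
# confounders as covariates (implicitly assumes no true baseline difference).
ancova = smf.ols("post ~ pre + iq + disposition + C(group)", data=df).fit()

# Change-from-baseline approach (corresponding to the group-by-time effect in
# a repeated-measures ANOVA), with only the confounders as covariates.
change = smf.ols("change ~ iq + disposition + C(group)", data=df).fit()

print(ancova.params["C(group)[T.other]"], change.params["C(group)[T.other]"])
```

The key difference is visible in the two formulas: the ANCOVA adjusts Time 2 scores for Time 1 scores, which builds in the assumption that the groups are equal at baseline, whereas the change-score model compares gains directly, which is why it is preferred here where baseline equality cannot be assumed.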

Figure 3.2: Possible quasi-experimental design outcome 2 (dependent measure at pre-test and post-test for the experimental and control groups).


Besides a selection-maturation effect, another potential threat with such a pattern of results is a local history effect, where some event affects one group but not the other. This problem could arise in the longitudinal study if, for example, the mathematics group were also more likely to be taking a logic, critical thinking, or physics course. This possibility is easy to check by recording all of the subjects that the participants are studying.

Another possible outcome of the design identified by Christensen (2000), shown in Figure 3.2, is that both groups change on the dependent measure over time, but the experimental group changes more than the control group. Again, this could be the result of a selection-maturation effect where the experimental group is on a faster developmental trajectory than the control group. A third possibility identified by Christensen (2000) is shown in Figure 3.3. In this case the experimental group scores lower than the control group at pre-test and increases to nearer the level of the control group by post-test. This pattern of results is more likely to occur when the experimental group is a disadvantaged group and the treatment is an intervention designed to help them, for example an intervention to help dyslexic students improve their reading speed.

It is not likely that the third possibility would occur in the case of reasoning ability in mathematics and non-mathematics students, but if it were to occur there would be a danger that the effect was due to regression towards the mean by the unusually low-scoring experimental group. In that case it would be necessary to also track the disadvantaged group’s scores over time in the absence of any intervention. If the scores were consistent over time, it would support the conclusion that the improvement in the experimental group was in fact due to the treatment.
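The following small simulation (again illustrative only, not based on any data reported here) shows how regression towards the mean alone can produce an apparent improvement in a group selected for unusually low pre-test scores, even when nothing changes between the two testing occasions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_ability = rng.normal(100, 10, n)        # stable underlying ability
pre = true_ability + rng.normal(0, 10, n)    # noisy pre-test measurement
post = true_ability + rng.normal(0, 10, n)   # noisy post-test, no treatment given

# Select the lowest-scoring 10% at pre-test, a stand-in for a disadvantaged group.
low = pre < np.percentile(pre, 10)
print(f"Pre-test mean of selected group:  {pre[low].mean():.1f}")   # well below 100
print(f"Post-test mean of selected group: {post[low].mean():.1f}")  # closer to 100
```

Because measurement error contributes to the extreme pre-test scores, the same participants score closer to the population mean at post-test without any treatment, which is why tracking the untreated group over time is needed to rule this explanation out.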

Figure 3.3: Possible quasi-experimental design outcome 3 (dependent measure at pre-test and post-test for the experimental and control groups).

The fourth and final possibility identified by Christensen (2000) is that shown in Figure 3.4, where the experimental group’s scores start below the control group’s and finish higher, while the control group’s scores do not change. In this case the alternative hypotheses that threaten the other possible patterns of results are not an issue. Regression towards the mean is not a plausible explanation, and neither is a selection-maturation effect, because it is usually the group that scores highest at pre-test that develops fastest.

To conclude, a quasi-experimental design is not ideal, but because participants cannot ethically or practically be assigned to studying different subjects at A level or degree level, it has to suffice. The main issue is that, without random assignment of participants to conditions, confounding extraneous variables are not ruled out. This means that there may be alternative explanations for any effects found. Unless these alternatives can be statistically controlled for or deemed implausible, it is not possible to establish causation in a quasi-experimental design. In fact, even if all known confounding variables are ruled out, it is still not safe to conclude a causal relationship, because there may be unknown confounding variables that are having an influence. Random assignment to conditions is the only way to avoid problems of this kind. This does not mean that the findings of quasi-experimental studies are unimportant or useless, but it does mean that they must be interpreted carefully so as not to overstate any relationships found.
