Implementation and process evaluation

Overview
For each fidelity dimension, the criteria below are listed from the highest to the lowest level of implementation.

2. Technology
- Minimum 2:1 pupil-to-computer ratio,[27] with computers running Scratch 2.0 offline or adequate internet access
- Minimum 3:1 pupil-to-computer ratio

3. Coverage (data from Y5 & Y6 teacher surveys: 15 questions relating to the 3 modules in each year, with 4 investigations plus 1 test per module)
- Pupils taught at least some of the core activities from across 5 different modules
- Pupils taught at least some of the core activities from across 4 different modules
- Pupils taught at least some of the core activities from across 3 different modules

4. Time (data from Y5 (Q13) & Y6 (Q19) teacher surveys)
- Time spent on teaching is at least 20 hours in Y5 and at least 12 hours in Y6
- Time spent on teaching is at least 12 hours per year
- Time spent on teaching is less than 12 hours per year

5. Progression (data from Y5 (Q13) & Y6 (Q19) teacher surveys)
- The order of modules and the order of activities are mostly followed
- The order of modules is mostly followed
Determining levels of implementation and fidelity
In this section, the approach to determining fidelity is considered. It was necessary to composite some of the survey data because a small number of teachers submitted multiple responses and, in some cases, more than one teacher per school responded. The term 'composite' refers to the process of combining individual teacher responses into a single school-level response. This was done by averaging data or, where averaging was not possible or meaningful (for example, in relation to the ratio of computers to children), by taking the lower implementation level. Further details of the fidelity data, and of how survey responses were analysed and composited, are provided in Appendices J and K.

[27] With regard to technology fidelity, the ratio of 2:1 was considered optimum, with pupils learning collaboratively.
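The compositing rule described above can be sketched as follows. This is an illustrative sketch only: the function names and the numeric level coding (3 = high, 2 = medium, 1 = low) are assumptions, not taken from the report.

```python
from statistics import mean

# Illustrative sketch of compositing several teachers' responses into a
# single school-level response: numeric answers are averaged, while ordinal
# implementation levels are combined by taking the lower level.
# The level coding (3 = high, 2 = medium, 1 = low) is an assumption.

def composite_numeric(responses):
    """Average numeric answers (e.g. hours spent teaching) across teachers."""
    return mean(responses)

def composite_level(levels):
    """Where averaging is not meaningful (e.g. pupil-to-computer ratio),
    take the lower implementation level."""
    return min(levels)

# Example: two teachers from the same school.
print(composite_numeric([14, 22]))  # average hours: 18
print(composite_level([3, 2]))      # lower level: 2
```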
In the cases of attendance and implementation, fidelity was determined by, first, considering the overall teacher sample (ignoring the aspect of the attendance fidelity criteria related to in-school teacher-to-teacher professional development). Second, for attendance, fidelity was considered in relation to the Y5 (n = 35) and Y6 (n = 31) composite school-level survey responses; here, both the attendance criteria and the criteria related to in-school professional development by an attendee were applied. For the other fidelity dimensions, data were considered in relation to the composite school-level responses to the Y5 and Y6 surveys.
Because overall fidelity is determined by combining Y5 and Y6 fidelity data across all dimensions, it is only possible to determine fidelity for the 27 cases used in the on-treatment analysis.
Data from teacher interviews suggest that, where there was variance between classes within a school, it was relatively minor: for example, different amounts of time spent on activities or different ways of introducing them. It is therefore a reasonable assumption that such differences were not substantial, and, for all fidelity dimensions other than attendance, individual teacher responses are assumed to apply to the whole school.
Teacher response, school response and other implementation and process evaluation analysis

The need to composite teacher responses when considering implementation levels and fidelity was discussed above. For other aspects of the process evaluation, such as teacher views of the materials, professional development outcomes, or whether teachers taught ScratchMaths in mathematics or other lessons, full teacher-level data are reported.
Costs
EEF cost evaluation guidance was supplied to the IoE delivery team, who asked their finance department to identify the costs of delivering the intervention after separating out the costs of design, development and research incurred by the IoE ScratchMaths team. Because of difficulties in separating costs for different aspects of the project, SHU calculated costs for models of PD delivery by two PD leads, as observed, based on likely staff and non-pay costs. The method used to calculate costs followed EEF guidance on costs per pupil per year.
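Under that guidance, the headline figure is essentially the total delivery cost spread across the pupils reached and the years of delivery. A minimal sketch of the calculation follows; all figures are hypothetical placeholders, not the trial's actual costs.

```python
# Minimal sketch of a cost-per-pupil-per-year calculation in the spirit of
# the EEF guidance. Every number below is a hypothetical placeholder.

def cost_per_pupil_per_year(total_cost, n_pupils, n_years):
    """Spread the total delivery cost over pupils reached and years of delivery."""
    return total_cost / (n_pupils * n_years)

# Hypothetical: £30,000 delivery cost, 1,500 pupils, 2 years of delivery.
print(cost_per_pupil_per_year(30_000, 1_500, 2))  # 10.0 (£ per pupil per year)
```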
The delivery team was also asked to provide exemplifications of delivery costs by local hub providers, based on the costs of training Y5 teachers in control schools in 2016/17 as per the waitlist design. These costs would apply to replication in the existing hubs, given that the leads were already familiar with the materials and had supported the intervention training. If the project were scaled up, there might be additional costs to train professional development facilitators in other areas.
Timeline
The table below provides a timeline of intervention and evaluation activity.

Table 7: Timeline

| Date | Activity | Responsibility |
| --- | --- | --- |
| Sept 2014 – Apr 2015 | ScratchMaths design and set-up, including work with design schools | ScratchMaths team |
| Aug 2014 – July 2015 | Develop CT test and pilot | Evaluation team |
| Jan 2015 – Mar 2015 | Recruit schools to trial | ScratchMaths team |
| May 2015 | Randomisation of schools | Evaluation team |
| June 2015 | School training begins for Y5 teachers | ScratchMaths team |
| July 2015 | Process evaluation visits to PD events | Evaluation team |
| Sept 2015 – Apr 2016 | Delivery to Y5 pupils in intervention schools | ScratchMaths team |
| October 2015 | Design second year of materials | ScratchMaths team |
| Nov 2015 – Jan 2016 | CT test with independent sample to establish correlation with KS2 | Evaluation team |
| Feb – Apr 2016 | Process evaluation telephone interviews with teachers | Evaluation team |
| May 2016 | Testing pupils with CT test | Evaluation team |
| June 2016 | Y5 teacher survey | Evaluation team |
| June 2016 | Training for Y6 teachers begins in intervention schools; training for Y5 Wave 2 control teachers begins | ScratchMaths team |
| June 2016 | Process evaluation visits to PD events | Evaluation team |
| Sept 2016 – Apr 2017 | Delivery to Y6 pupils in intervention schools and Y5 pupils in control schools | ScratchMaths team |
| Feb 2017 – Apr 2017 | Process evaluation telephone interviews with teachers | Evaluation team |
| May 2017 – July 2017 | Survey of control and intervention school teachers | Evaluation team |
| Nov 2017 – Jan 2018 | | |