…it is the peculiar and perpetual error of the human understanding to be more moved and excited by affirmatives than by negatives… Francis Bacon

Today’s post has been contributed by a reader who has asked to remain anonymous, but got in touch after reading my blog explaining why I’d abandoned the SOLO taxonomy. Whilst this post isn’t directly related to SOLO, it does address the need to provide compelling evidence when we start getting excited about a particular style or approach to teaching. Increasingly I’ve become convinced that one way to increase students’ attainment might be to harness some sort of permanent Hawthorne Effect by telling them that they are the focus of a series of cutting-edge interventions. One problem with this theory was that I couldn’t see how I might come up with endless new strategies to perpetrate. Surely, I reasoned, a condition for producing a Hawthorne Effect must be that the teacher would have to be convinced of the efficacy of the intervention? Well, maybe not. Read on…

My study

My study was fuelled by my personal lethargy towards yet another initiative, launched without any convincing evidence that we actually needed to do anything.

Background

At the end of 2004 the latest INSET trend concerned addressing the gender gap, and specifically boys’ underperformance at KS3 and KS4. Staff were presented with data showing a difference in mean KS4 points scores, with girls achieving a higher mean than boys. The conclusion presented to us was that this difference mattered and we needed to do something about it.

No ‘proper’ stats were used to quantify the significance of this difference.  So, I went back to the raw data and analysed the impact of the following factors:

  • Gender
  • FSM
  • Originating primary school
  • KS2 English / maths / science results
  • KS3 English / maths / science results
  • Reading age
  • Learner attendance
  • Teacher attendance
  • SEN / MAT / EAL

All had an impact ‘on average’, but the most significant factors were:

  1. Teacher attendance
  2. Pupils’ attendance
  3. KS2 English results

All of these were significant at p = 0.001 or below.

Gender and FSM were the least significant factors of those measured.
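For anyone curious about what ‘proper’ stats means here, the sketch below shows the kind of check involved – a simple significance test for a binary factor and a correlation for a continuous one. It is illustrative only: the file name and column names are invented, and it is not the exact analysis I ran.

```python
# Illustrative sketch only: invented file and column names.
import pandas as pd
from scipy import stats

df = pd.read_csv("ks4_cohort.csv")  # hypothetical export of the raw cohort data

# Two-sample t-test: is the gender difference in mean KS4 points significant?
boys = df.loc[df["gender"] == "M", "ks4_points"]
girls = df.loc[df["gender"] == "F", "ks4_points"]
t, p = stats.ttest_ind(girls, boys, equal_var=False)
print(f"gender: difference in means = {girls.mean() - boys.mean():.1f} points, p = {p:.4f}")

# Continuous factors such as attendance can be checked with a correlation.
r, p = stats.pearsonr(df["pupil_attendance"], df["ks4_points"])
print(f"pupil attendance: r = {r:.2f}, p = {p:.4f}")
```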

So armed, I went back to the Headteacher and said (words to that effect) “I’m not putting in place a scheme to address gender differentials in science, as other factors are more important.” The Head’s response was to explain that this was not a request and that the instruction came from the Director of Education in the LEA.

I returned to my desk, wrote up my findings and sent them to the Director of Education, who declined to respond.

Time moved on, and the whole-school gender ‘issue’ continued apace, this time supported by ‘evidence’ from Professor Dave Egan (Cardiff University and adviser to the Welsh Government).  Perturbed, I contacted Dave Egan to express my worries over all this – only to get his agreement that his data had been taken out of context and that he had never prescribed anything so draconian. Sensibly, he suggested that teachers should act only if the evidence in their own school showed an intervention was necessary.  Sadly this had been lost in translation, and schools were mandated to “have a gender differential policy”.

Move forward to the start of the next academic year.

The science department had conclusive evidence that gender was amongst the least important factors affecting our pupils’ performance. Nevertheless, we were compelled to discuss how we would fix this non-existent problem and implement a solution.  I continued my data exploration and surveyed all the KS4 pupils for:

  • Odd / Even house number
  • Games console ownership
  • Left or right-handed
  • Gender

As anticipated, all four factors, when considered as averages, had an impact on KS4 results.  In order of significance they were:

  1. Odd / Even house number
  2. Games console ownership
  3. Left or right-handed
  4. Gender

So living in an odd-numbered house had a greater impact on your GCSE results than your gender did.
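Ranking factors like this is trivial to reproduce. Here is a minimal sketch of the loop involved, assuming each survey answer has been joined to the results data as a true/false column (again, the file and column names are invented for illustration):

```python
# Illustrative sketch: rank binary splits by the significance of the
# difference in mean KS4 points. Each factor column is assumed to be
# a boolean flag; all names are invented.
import pandas as pd
from scipy import stats

df = pd.read_csv("ks4_survey.csv")

factors = ["odd_house_number", "owns_games_console", "left_handed", "is_male"]
results = []
for factor in factors:
    group_a = df.loc[df[factor], "ks4_points"]
    group_b = df.loc[~df[factor], "ks4_points"]
    _, p = stats.ttest_ind(group_a, group_b, equal_var=False)
    results.append((factor, group_a.mean() - group_b.mean(), p))

# Every split shows *some* difference in the averages; the p-value is what
# tells you whether that difference is anything more than noise.
for factor, diff, p in sorted(results, key=lambda r: r[2]):
    print(f"{factor:>20}: difference = {diff:5.1f} points, p = {p:.3f}")
```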

I wrote up my findings and presented them to SLT. It was treated as a ‘bit of fun’. One SLT member looked a bit worried and asked, “You’re not seriously expecting us to buy all our students a PS3 are you?”

Inspection was looming and the school needed to demonstrate effective monitoring of data. I was basically told to “wind my neck in” and “play ball”.  In order to show that the school was “research based” and was “putting in place appropriate interventions”, I conceived the following experiment:

Year 7 cohort, six-form intake.  Mixed-ability form-group classes.

Classes 1 & 2 taught by teacher A

Classes 3 & 4 taught by teacher B

Class 5 taught by teacher C

Class 6 taught by teacher D

All classes were taught the same curriculum topics at the same time, following the same scheme of work.  Classes 5 & 6 did not take part in the experiment.

Classes 1 and 3 were selected for the ‘intervention’.

Classes 2 and 4 were selected to be “control classes” with no interventions.

These classes were chosen so that we could keep teaching as consistent as possible: teachers A and B each taught one intervention class and one control class.

The intervention consisted of informing the classes that they were “part of an experiment to try out new teaching ideas” and that they would be “monitored closely”. A letter was sent to parents informing them that their child’s class “had been selected to trial a new science scheme of work” and that we would be “updating parents at the end of the study.”

That was it. Nothing else changed between the classes.  The only intervention was telling the learners that there was an intervention.

All the Year 7 pupils were assessed before and after the intervention. Pupils in the classes that received the pseudo-intervention achieved on average two sub-levels of progress, whereas the control classes achieved only 1.5 sub-levels. This was significant at p = 0.005.
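The comparison itself was nothing exotic – a before/after measure of progress and a two-sample test between the groups. A minimal sketch, assuming a ‘progress’ column holding sub-levels gained and a ‘group’ label for each pupil (both invented names, not the actual data):

```python
# Illustrative sketch of the intervention/control comparison.
# 'progress' = sub-levels gained between the two assessments (invented names).
import pandas as pd
from scipy import stats

df = pd.read_csv("year7_science_progress.csv")  # hypothetical assessment data

intervention = df.loc[df["group"] == "intervention", "progress"]
control = df.loc[df["group"] == "control", "progress"]

t, p = stats.ttest_ind(intervention, control, equal_var=False)
print(f"intervention mean progress: {intervention.mean():.2f} sub-levels")
print(f"control mean progress:      {control.mean():.2f} sub-levels")
print(f"p-value:                    {p:.4f}")
```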

Importantly, this pseudo-intervention had a more significant effect than the gender split that I was expected to ‘do something about’.

Our conclusion was that telling pupils that they were part of an experiment, that they were special and that they were receiving some extra attention produces an impact. So, armed with this wealth of data and interesting evidence, we started the next school year. But did it make a difference to school priorities?

No. The school continued to mount expensive, time-consuming interventions that focussed on gender, FSM and pupils’ levels of literacy and numeracy.

My point (which never seemed to get any traction) concerned proper statistics – especially analysis of variance and significance.  Split learners into two groups by any measure and you will almost always see a difference between the group averages.  Only by analysing the variance between groups and the significance of the difference is it possible to determine whether it is worth acting on.  Even obviously meaningless splits such as left/right-handedness, odd/even house numbers or fake strategies will show a difference on average.
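The point is easy to demonstrate with a quick simulation: split a cohort of identical-on-average pupils into two groups completely at random and the group means will practically never be equal, yet a significance test shows the gap is almost always just noise. (A sketch with made-up numbers, not real pupil data.)

```python
# Simulate random splits of a fake cohort: every split produces *a* difference
# in group means, but very few of those differences are significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(loc=46, scale=12, size=200)  # made-up 'KS4 points' for 200 pupils

differences, significant = [], 0
for _ in range(1000):
    shuffled = rng.permutation(200)
    a, b = scores[shuffled[:100]], scores[shuffled[100:]]
    differences.append(abs(a.mean() - b.mean()))
    if stats.ttest_ind(a, b).pvalue < 0.05:
        significant += 1

print(f"average gap between group means: {np.mean(differences):.2f} points")
print(f"splits 'significant' at p < 0.05: {significant} / 1000")
```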

It’s a sad reflection on schools and their relationship with data that this exercise could probably be repeated pretty much anywhere and would likely get similar results.