Although everyone’s education is important, the education of disadvantaged students is, arguably, of much greater importance than that of students from more advantaged backgrounds. The more privileged your background, the less likely it is to matter what happens at school. Conversely, the more socially disadvantaged your background, the greater the impact of what does, or does not, happen at school. Sadly though, access to education is more than likely to experience a Matthew effect: those who have the best chances in life are the most likely to get a great education. That being the case, it seems reasonable to suggest that whilst all children deserve decisions taken by teachers and school leaders to be rooted in evidence of what’s most likely to lead to increases in academic progress, this is more urgent for the ‘have nots’.

With this in mind I read John Tomsett’s latest blog on how best to support disadvantaged students with great interest. I always learn a lot from reading John’s thoughts, even if I occasionally disagree. John cites the Sutton Trust report, Improving the impact of teachers on student achievement, which makes this statement:

The effects of high-quality teaching are especially significant for pupils from disadvantaged backgrounds: over a school year, these pupils gain 1.5 years’ worth of learning with very effective teachers, compared with 0.5 years with poorly performing teachers. In other words, for poor pupils the difference between a good teacher and a bad teacher is a whole year’s learning.

Powerful stuff.

He then quotes Sir Kevan Collins, CEO of the Education Endowment Foundation, as saying, “If you’re not using evidence, you must be using prejudice…” I found myself feeling rather envious of this fabulously pithy aphorism, but then reflected that it might be more ‘truthy’ than true. Of course he’s entirely correct that prejudice is the opposite of evidence, but then almost everyone justifies their position with some sort of evidence.

There are two main issues here. Firstly, not all evidence is equal. It’s no good just ‘using evidence’; the question should always be: what evidence? As E.D. Hirsch Jr. has said,

Almost every educational practice that has ever been pursued has been supported with data by somebody. I don’t know a single failed policy, ranging from the naturalistic teaching of reading, to the open classroom, to the teaching of abstract set-theory in third-grade math class that hasn’t been research-based. Experts have advocated almost every conceivable practice short of inflicting permanent bodily harm.

The EEF’s Pupil Premium Toolkit has attempted to summarise the research around the various interventions on which school leaders might be tempted to lavish time and resources, and this has probably helped the education sector become more evidence informed. However, there are some real issues with the picture it presents. First, it relies too heavily on aggregating meta-analyses without accounting for the real problems with such an approach; I’ve outlined my reservations here, and the sketch after this paragraph illustrates one of them. Second, the picture is partial. Some of the most robust, well-replicated findings from cognitive psychology (like the spacing effect and retrieval practice) are entirely absent, and for schools looking to increase the chances of their most disadvantaged students there are few more productive avenues to explore.

Then there’s feedback, the most highly rated intervention in the Toolkit, which will, we’re told, result in +8 months’ worth of progress per year of instruction. But compared with what? Not giving feedback? I’m sure I’m not alone when I say I have never encountered a teacher who does not give feedback! I’ve explored further troubling issues with the assumption that feedback is always positive here.
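To make the meta-analysis point concrete, here’s a minimal sketch, in Python and with invented numbers, of one well-known problem: an effect size is a difference in means divided by a standard deviation, so the very same raw gain looks far more impressive when measured with a narrow, low-variance test than with a broad, noisy one. Averaging effect sizes across meta-analyses built on different outcome measures therefore risks comparing apples with oranges.

```python
# Toy illustration with invented numbers: an identical raw gain produces
# very different effect sizes depending on the spread of the outcome measure.

def cohens_d(raw_gain: float, sd: float) -> float:
    """Standardised mean difference: raw gain divided by the measure's SD."""
    return raw_gain / sd

gain = 5.0  # both hypothetical studies find a 5-mark average improvement

d_narrow = cohens_d(gain, sd=8.0)   # narrow, researcher-designed test
d_broad = cohens_d(gain, sd=25.0)   # broad, standardised test

print(f"Narrow test: d = {d_narrow:.2f}")  # d = 0.62 -- looks impressive
print(f"Broad test:  d = {d_broad:.2f}")   # d = 0.20 -- looks modest
```

Nothing about the teaching differs between these two hypothetical studies; only the measuring stick does. Rank interventions by averaged effect sizes and you build that distortion straight into the headline figures.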

Worse, the Toolkit offers succour to those who think Learning Styles might be a good bet, suggesting it’s likely to result in an additional two months’ worth of progress for every year’s worth of instruction. If you delve a little deeper, you’ll find that this is based on a median effect size of 0.13 – well below Hattie’s hinge point of 0.4. And further, we see that this effect size is only as high as 0.13 because of the findings of an unpublished piece of PhD research!
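To get a feel for how fragile that 0.13 might be, here’s a toy calculation. The effect sizes below are invented for illustration, not the Toolkit’s actual data, but they show how a single outsized estimate can drag the median of a small collection of studies upwards:

```python
from statistics import median

# Four hypothetical learning-styles effect sizes, in ascending order;
# 0.67 stands in for the sort of outlier one unpublished thesis might add.
effect_sizes = [-0.03, 0.05, 0.21, 0.67]

print(f"median with outlier:    {median(effect_sizes):.2f}")       # 0.13
print(f"median without outlier: {median(effect_sizes[:-1]):.2f}")  # 0.05
```

With only a handful of studies, even the median – supposedly robust to outliers – offers little protection against one anomalous, unreplicated result.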

This is an embarrassment! Surely we could come up with a more sensible approach to informing teachers about the best way to support disadvantaged children?

The second point about ‘using evidence vs using prejudice’ is that I know for a fact that there are very many people who, given the same source of evidence as me, will arrive at entirely different conclusions. The problem isn’t either using evidence or using prejudice, it’s that everyone interprets evidence according to their prejudices. I’ve outlined the problems with cognitive bias here, but for a more extensive analysis, you might find my book What If Everything You Knew About Education Was Wrong? useful.

Those at the EEF are as prone to prejudice as anyone. Take the decision to fund further research into Philosophy for Children. There are serious misgivings about the way the trials so far have been run, and Greg Ashman makes the point that some at the EEF are overly attached to the perceived benefits of ‘meta-cognitive strategies’:

This is an excessively broad category of teaching interventions aimed at increasing thinking about learning. Reading comprehension tricks fall into this group and such strategies seem ripe for expectation effects (e.g. the placebo effect).

Perhaps this has led to an unconscious bias at EEF headquarters. During [Jonathan] Sharples’ presentation he suggested that the results from trials of meta-cognitive strategies were consistently positive with a similar effect size. Yet he seems to have forgotten about the recent EEF trial of cognitive acceleration, a meta-cognitive approach to science lessons. This trial found no effect for cognitive acceleration.

Of course I agree that evidence is vital in our efforts to transform the life chances of disadvantaged children but I’m not sure we can confidently conclude, as John does, that “we have to improve the quality of teaching in our schools: it is the only thing that matters.” Really? The only thing? Doesn’t it matter what’s being taught and how it’s being assessed? What about the quality of a school’s behaviour systems and pastoral care? Obviously the quality of teaching in schools is crucial, but what’s the point in teachers doing a fabulous job of teaching something rubbish? I’ve argued before that what we teach trumps how we teach. Charitably, we should probably conclude that by ‘quality of teaching’ John means to include these other things, but there will inevitably be some readers who conclude otherwise.

Maybe this suggests we need more, or better, evidence about what might constitute the best curriculum provision for disadvantaged children. What I do know is that currently the EEF is as much part of the problem as it is part of the solution. It’s worth considering whether decisions about the education of the most disadvantaged are too important to leave to the prejudices of an ideologically driven, unaccountable clearinghouse which decides both what research to fund and what to make available as part of its Toolkit.

John includes a quotation from the economist Thomas Sowell’s book, The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy: “It is so easy to be wrong – and to persist in being wrong – when the costs of being wrong are paid by others.” Quite so.