Comparative Judgement

Making a mockery of marking: The new GCSE English Language mocks

December 5th, 2016 | assessment

The following is a guest post from the mastermind of Comparative Judgement, Dr Chris Wheadon. The marking of English Language is likely to be extremely challenging this year. English Language has long-form answer questions, typically with 8-, 16- and 24-mark responses. Ofqual’s research suggests the following range of precision is normal across GCSE and A level: 8-mark items, +/- 3 marks; 16-mark items, +/- 4 marks; 24-mark items, +/- 6 marks. So, when an 8-mark item is marked, it is normal, for the same response, for one marker to give 4 marks while another gives 7 [...]
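The quoted tolerances can be sketched as a quick check of whether two markers' scores for the same response fall within Ofqual's "normal" range. This is a hedged illustration only: the tolerance values come from the figures quoted above, while the function name and sample marks are invented.

```python
# Sketch: is the gap between two markers' scores within the "normal"
# precision range Ofqual's research suggests? Tolerances are as quoted
# in the post; the function and sample marks are hypothetical.

TOLERANCE = {8: 3, 16: 4, 24: 6}  # max marks for the item -> +/- marks

def within_normal_range(max_marks: int, mark_a: int, mark_b: int) -> bool:
    """True if the difference between two markers is within the quoted tolerance."""
    return abs(mark_a - mark_b) <= TOLERANCE[max_marks]

# The post's example: one marker gives 4, another gives 7, on an 8-mark item.
print(within_normal_range(8, 4, 7))  # a 3-mark gap is within +/- 3, so True
```

On this view, a 3-mark disagreement on an 8-mark item is not an aberration but entirely normal marking behaviour.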

Go Compare!

September 9th, 2016 | assessment

Another one from Teach Secondary, this one from their assessment special. This time it's an overview of Comparative Judgement. Human beings are exceptionally poor at judging the quality of a thing on its own. We generally know whether we like something, but we struggle to accurately evaluate just how good or bad a thing is. It’s much easier for us to compare two things and weigh up the similarities and differences. This means we are often unaware of what a ‘correct’ judgement might be and are easily influenced by extraneous suggestions. This is compounded by the fact that we aren’t [...]

10 Misconceptions about Comparative Judgement

July 7th, 2016 | assessment

I've been writing enthusiastically about Comparative Judgement as a way to assess children's performance for some months now. Some people, though, are understandably suspicious of the idea. That's pretty normal. As a species we tend to be suspicious of anything unfamiliar and like stuff we've seen before. When something new comes along there will always be those who get over-excited and curmudgeons who suck their teeth and shake their heads. Scepticism is healthy. Here are a few of the criticisms I've seen of comparative judgement: It's not accurate. Ranking children is cruel and unfair. It produces data which says whether a child has passed or [...]

Proof of progress Part 3

July 6th, 2016 | assessment

Who's better at judging? PhDs or teachers? In Part 1 of this series I described how Comparative Judgement works and the process of designing an assessment to test Year 5 students' writing ability. Then in Part 2 I outlined the process of judging these scripts and the results they generated. In this post I'm going to draw some tentative conclusions about the differences between the way teachers approach students' work and the way other experts might do so. After taking part in judging scripts with teachers, my suspicion was that teachers’ judgements might be warped by the long habit of relying on rubrics [...]

Proof of progress Part 2

March 11th, 2016 | assessment

Back in January I described the comparative judgement trial that we were undertaking at Swindon Academy in collaboration with Chris Wheadon and his shiny new Proof of Progress system. Today, Chris met with our KS2 team and several brave volunteers from the secondary English faculty to judge the completed scripts our Year 5 students had written. Chris began proceedings by briefly describing the process and explaining that we should aim to make a judgement every 20 seconds or so. The process really couldn't be simpler: the system displays two scripts at a time and you just have to judge which one you think is [...]
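Behind the scenes, comparative judgement tools typically turn those quick pairwise decisions into a ranking by fitting a Bradley-Terry-style model. The sketch below is an illustration of that general technique, not the actual Proof of Progress implementation; the script names and judgement data are invented.

```python
# Minimal Bradley-Terry-style sketch: turn pairwise judgements into a
# ranking. Illustrative only -- NOT the Proof of Progress system itself.
from collections import defaultdict
import math

# Each (winner, loser) pair records which script a judge preferred.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("A", "B")]

scripts = sorted({s for pair in judgements for s in pair})
strength = {s: 0.0 for s in scripts}  # log-strength parameter per script

# Simple gradient ascent on the Bradley-Terry log-likelihood.
for _ in range(500):
    grad = defaultdict(float)
    for winner, loser in judgements:
        # Modelled probability that the winner beats the loser.
        p_win = 1 / (1 + math.exp(strength[loser] - strength[winner]))
        grad[winner] += 1 - p_win
        grad[loser] -= 1 - p_win
    for s in scripts:
        strength[s] += 0.1 * grad[s]

ranking = sorted(scripts, key=strength.get, reverse=True)
print(ranking)  # scripts ordered from strongest to weakest
```

The appeal is that no judge ever needs to assign an absolute mark: many fast "which is better?" decisions, aggregated this way, produce a reliable scale.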

Proof of progress – Part 1

January 30th, 2016 | assessment

Measuring progress is a big deal. I've written before about the many and various ways we get assessment wrong but, increasingly, I'm becoming convinced there are some ways we might get it right. As regular readers will know, I'm interested in the potential of comparative judgement (CJ) and have written about it here and here. Greg Ashman mentions the process obliquely in his new book: When we measure on an absolute scale using a set of criteria, we introduce the possibility of all students scoring 9 or 10 out of 10, particularly if we have trained them well. However, what is really of [...]

Rubrics warp teaching and assessment

December 11th, 2015 | writing

"Men are more apt to be mistaken in their generalizations than in their particular observations." (Machiavelli) In a recent blog post, children's author Michael Rosen has suggested how teachers should teach, assess and share students' writing. He has helpfully broken his thoughts into three areas: teaching & assessment, editing, and sharing. In this post, I'm going to consider his ideas on the teaching and assessment of 'good writing'. Rosen points out that schools teach children to write for exams and that writing for exams is not the same thing as writing well. This is, of course, true; we teach what's assessed and [...]
