Does it do what it’s supposed to? Assessing the assessment

Tick-list

In response to a request for constructive criticism of the English assessment model I helped design, Michael Tidd got in touch to query whether it met his 7 questions you should ask about any new ‘post-levels’ assessment scheme.

For the record, these questions are:

  1. Can it be shared with students?
  2. Is it manageable and useful for teachers?
  3. Will it identify where students are falling behind soon enough?
  4. Will it help shape curriculum and teaching?
  5. Will it provide information that can be shared with parents?
  6. Will it help to track progress across the key stage?
  7. Does it avoid making meaningless sub-divisions?

My initial response was that I was unsure whether it did in fact successfully meet these criteria, and this post is an attempt to think through what might need to be tweaked or jettisoned in order to improve it.

1. Can it be shared with students?

In its current form some of the wording will probably be opaque to most pupils. I’m suspicious of ‘translating’ rubrics into ‘pupil friendly’ language because it’s almost impossible to retain the nuances of meaning from the original. Having said that, the descriptions we settled on were stripped of most of the meaningless adjectives and adverbs that plagued National Curriculum levels and so it ought to be relatively straightforward to communicate to pupils where they are and what they would need to do in order to make progress. What’s more difficult is to determine targets. In the past there was a spurious but at least widely agreed consensus that pupils ought to make at least 2 sub levels of progress per year. This was something that could be easily communicated, if not fully comprehended. But in a post levels system, pupils will need to know what they can do, and what they ought to be able to do next. And in those terms the descriptors we’ve come up with ought to be able to serve that purpose.

2. Is it manageable and useful for teachers?

English teachers have had to contend with assessing pupils’ work using 15 different Assessment Focuses each divided into 8 distinct levels. To say that this was an unwieldy way to mark work is something of an understatement.

These 8 writing AFs, 7 reading AFs and 4 speaking & listening AFs have been slimmed down into 6 ‘organising concepts’ that, we hope, capture the essence of what experts and academics in the field of English do. And these concepts have been divided into six quite distinct ‘levels’ of mastery that should avoid any pointless hairsplitting about whether work is ‘competent’, ‘clear’, ‘confident’ or ‘sophisticated’; it ought to be obvious at a glance whether pupils are ‘working towards’ being able to operate with a concept or able to do so ‘sometimes’. So is it manageable? Yes. But is it helpful? I think so. One criticism of this system was that it is ‘vaguer’ than NC levels. While it’s certainly true that our descriptors are generic in nature, I’m not sure that they’re actually any vaguer than what is currently in place, and vagueness may have some advantages in that it might prevent us having too much certainty about making pronouncements on what pupils can and can’t do. My hope, though, would be that these generic descriptors will be used to design specific rubrics for each assessment task and will therefore be made as specific as is thought necessary to be useful.

3. Will it identify where students are falling behind soon enough?

This may be a weakness. Tidd argues that “NC levels were too broad to be able to identify students who were not making progress, but sub-levels did not link closely enough to the content.” While this system should be able to record meaningful information (rather than just data) about pupils’ progress, it may not be able to do so in sufficient detail. Is it enough to be able to highlight where a pupil is ‘secure’ and where their progress is ‘exceptional’? It’s not clear how these stages intersect with age related expectations and therefore it may not be agile enough to flag up where children are falling behind. This needs further thought and any suggestions would be welcomed.

4. Will it help shape curriculum and teaching?

Well, seeing as this assessment system was the starting point for the curriculum that was subsequently designed, this must be a resounding ‘Yes!’ The programme of study and individual schemes of learning have been put together in order to allow and encourage pupils to criticise and create knowledge and engage dialectically with the content that is to be taught. A potential failing is that our ‘beyond’ category occupies a position above ‘exceptional’ and gives the impression that you can only be critical after mastering a concept. I’m not sure whether this is true, and it runs the same risk as putting ‘creativity’ at the top of Bloom’s revised taxonomy. Creativity is dependent on the quality of what you know, but making it the apex of a hierarchy of skills suggests that you can only be creative after you have been analytical and evaluative. This may not be a particularly useful message, and maybe we could do more to encourage pupils to go ‘beyond’ at an earlier stage in their mastery of a concept?

5. Will it provide information that can be shared with parents?

Yes. The fact that alpha-numeric labels have been stripped away will mean that teachers and parents will be forced to communicate at the level of what pupils can actually do rather than the obfuscating shorthand of trying to encapsulate a child by assigning them a number. As Tidd has pointed out, a system which nails the first of his criteria (communication with pupils) should do well in this category too. I’m of the opinion that anything that makes it too easy to fool ourselves into believing we can absorb information at a glance is problematic; I think we should have to work at trying to understand where our children are and what we need to do to support them to make further progress. But where this system may fall down is in the potential weaknesses highlighted in the response to Question 3: if the stages are not related to age-related expectations, will parents be able to make sense of how their children are performing in comparison to everyone else? And does this matter?

6. Will it help to track progress across the key stage?

Tidd acknowledges that tracking is not the same as assessment, but contends that the ability to track progress is an essential component of a useful assessment system. There are 2 concerns here: the tracking of coverage and the tracking of progress. Firstly, can (and should) an assessment system be used to map out what has been taught? Or is that the job of the curriculum? And secondly, can the assessment system indicate whether expectations are being met annually and across a key stage? This brings us back to the issue highlighted above – because I’m not sure how well our system captures age-related expectations, I’m not sure how well it can be used to track progress. Certainly it can be used to track what pupils can currently do, but what does this actually tell us about what they should be able to do?

7. Does it avoid making meaningless sub-divisions?

The fact that the content of English can rarely be reduced to a binary yes/no assessment of understanding and progress means that assessment design has always focused on the ‘skills’ of the subject. The foundational knowledge of grammar might be usefully assessed in this way, but most of what pupils know in English is specific to very narrow contexts: knowing the plot of Oliver Twist will not help you understand the plot of other novels, but having a general understanding of how plot development works might. Michael Young discusses in Bringing Knowledge Back In the need to teach ‘powerful’ knowledge over ‘context dependent’ knowledge. Most knowledge that is taught, Young argues, is limited to being procedural and skills based; it deals with specifics and doesn’t allow children to make the generalisations and judgements that come with acquiring ‘powerful’ or context-independent knowledge.


For these reasons, our assessment system has been organised around the powerful, organising concepts that underpin mastery of reading and writing. The subdivisions we’ve made within these concepts have, I hope, real meaning and will be useful.

On balance, I think this system stands up pretty well, although I’m sure there are many other useful tests our assessment system could undergo in order to tease out its fitness for purpose. There is always the caveat that it’s very hard to spot self-deception on your own, so I’d be very grateful for any further feedback or critique. The next step might be to subject any assessment tasks and specific rubrics to Rob Coe’s 47 criteria for a useful assessment.

If you haven’t already read the original post describing the process of designing the assessment system and curriculum, please have a look at One step beyond: assessing what we value.

Responses to Does it do what it’s supposed to? Assessing the assessment


  1. John Armshaw says:

    Re: point 3. Would it be possible to have benchmark models which show, perhaps, 3 (maybe more) different types of competency for each year group? If you were colouring the boxes these could be visualised as shapes. Students could be given the task of matching their performance to their target shape. Shaping up! It could provide a personalised map for progress. On the other hand it could be too unwieldy – is that how APP in English worked already?

    • David Didau says:

      Hi John – that’s an intriguing idea. I think worked examples might well be the best way to show pupils, parents & teachers what work at different stages looks like, and this might fit well with your ‘benchmark models’ – am I understanding it correctly?

      I’m not quite sure what you mean by colouring boxes & visualising shapes though? Could you show me an example?

  2. John Armshaw says:

    Hi David – I was talking about students or their teacher colouring in the box in each column of your grid which best describes their competency for that skill. This might give a certain broken pixelated line (shape). You might expect certain students to achieve certain shapes in certain positions on the grid by certain points in the key stage. You might have different expectations of others. I was thinking of how it might be used as a visual benchmark to highlight underperformers and provide targets. Changing the subject slightly, I went for a walk with my girls today and we saw two lambs being born, close up. That’s something we won’t forget in a hurry #educational

