Back in March I wrote a post called Why AfL might be wrong, and what to do about it, based largely on Dylan Wiliam’s book Embedded Formative Assessment (if you haven’t already read it, I encourage you to do so, as many of the common misconceptions about AfL are specifically addressed there). I’m pleased to report that Dylan has taken time out of his hectic schedule to comment on the post and defend the essentials of formative assessment. What follows is, in its entirety, the comment left on the original post.

In his post on “Why AfL might be wrong, and what to do about it” David Didau points out (correctly) that it is impossible to assess what students have learned in an individual lesson. As John Mason once said, “teaching takes place in time, but learning takes place over time” (Griffin, 1989). The ultimate test of any teaching is long-term changes in what students can do (or avoid doing, such as getting pregnant or taking drugs). The problem with such an approach to teaching is that if we wait until we see the long-term evidence, it will be too late. An analogy with automobile manufacturing may be helpful here.

In the 1970s and 1980s, the major American and European car-makers had smart people designing production processes through which vehicles progressed, and then, at the end of the process, the finished product would be inspected to see if it worked properly. The Japanese, building on the work of W. Edwards Deming (an American), realized that it would be far better to build quality into the manufacturing process. If something was wrong with a car, any worker on the production line had the authority to stop the line to make sure that the problem was fixed before the vehicle moved any further. This approach is often described as “building quality in” to the production process, rather than “inspecting quality in” at the final stage. Another way of describing the difference is a move from quality control to quality assurance.

Similarly, in teaching, while we are always interested in the long-run outcomes, the question is whether attending to some of the shorter-term outcomes can help us improve learning for young people.

This is an extraordinarily complex task, because we are trying to construct models of what is happening in a student’s mind when this is not directly observable. Ernst von Glasersfeld described the problem thus:

“Inevitably, that model will be constructed, not out of the child’s conceptual elements, but out of the conceptual elements that are the interviewer’s own. It is in this context that the epistemological principle of fit, rather than match, is of crucial importance. Just as cognitive organisms can never compare their conceptual organisations of experience with the structure of an independent objective reality, so the interviewer, experimenter, or teacher can never compare the model he or she has constructed of a child’s conceptualisations with what actually goes on in the child’s head. In the one case as in the other, the best that can be achieved is a model that remains viable within the range of available experience.” (von Glasersfeld, 1987, p. 13)

So I agree with David that we can never be sure that the conclusions we draw about what our students have learned are the right conclusions. This is why my definition of formative assessment does not require that the inferences we make from the evidence of student achievement actually improve student learning—learning is too complex and messy for this ever to be certain. What we can do is increase the odds that we are making the right decision on the basis of evidence rather than hunch.

In terms of the five strategies, I was surprised that David’s post specifically focused on critiquing success criteria. In fact, I went back to read what I had written on this in 2011 to check what I had said. Much of chapter 3 of my book Embedded Formative Assessment (where I discuss learning intentions) is spent railing against success criteria, and arguing that a shared construct of quality (what Guy Claxton calls a “nose for quality”) is what we should be aiming for, although on those rare occasions when we can spell out the rules for success, we should, of course, do so. Michael Polanyi’s work on “personal knowledge”, now over 50 years old, is still the definitive work on this, in my opinion.

In terms of eliciting evidence (which is definitely not just questioning, by the way, as I go to considerable lengths to show), of course we never really know what is happening in the student’s head, but I am confident that teaching will be better if the teacher bases their decisions about what to do next on a reasonably accurate model of the students’ thinking. There will also, I suspect, be strong differences across disciplines here. Asking a well-framed question in science may reveal that a student has what Jim Minstrell calls a facet of thinking (DeBarger et al., 2009) that is different from the intended learning. For example, a student may think that a piece of metal left outside on a winter’s night is colder than the wooden table on which it rests, when in fact the temperatures of the two are the same (the metal feels colder because it conducts heat away from the hand faster). You may not get rid of the facet of thinking quickly, but knowing that the student thinks this has to be better than not knowing it.

As for feedback, there really is a lot of rot talked about how feedback should and should not be given. People say that feedback should be descriptive, and maybe a lot of the time it should be, but people forget that the only good feedback is that which is acted upon, which is why the only thing that matters is the relationship between the teacher and the student. Every teacher knows that the same feedback that improves one student’s learning will, given to another student of similar achievement, make that student give up. Teachers need to know their students, so that they know when to push and when to back off. There will be times when it is perfectly appropriate to say to a student that a piece of work really isn’t worth marking because they have “phoned it in”, and other times when this would be completely inappropriate. Just as importantly, students need to trust their teachers. If they don’t think the teacher knows what he or she is talking about, or doesn’t have the student’s best interests at heart, the student is unlikely to invest the effort required to take the feedback on board. That is why almost all of the research on feedback is a waste of time: hardly any studies look at the responses cued in the individual recipient by the feedback.

The quote about collaborative/co-operative learning being one of the success stories of educational research comes from Robert Slavin, who has probably done more high-quality work in this area than anyone (Slavin et al., 2003). The problem is that few teachers ensure that the two criteria for collaborative learning are in place: group goals (so that students are working as a group rather than just in a group) and individual accountability (so that any student falling down on the job harms the entire group’s work). [I wrote a post last year on Effective Group Work which makes these points.] And if a teacher chooses to use such techniques, the teacher is still responsible for the quality of teaching provided by peers. As David notes, too often peer tutoring is just confident students providing poor-quality advice to their less confident peers.

Finally, in terms of self-assessment, it is, of course, tragic that in many schools self-assessment consists entirely of students making judgments about their own confidence that they have learned the intended material. We have over 50 years of research on self-reports showing that they cannot be trusted. But there is a huge amount of well-grounded research showing that helping students improve their self-assessment skills increases achievement. David specifically mentions error-checking, which is obviously important, and my thinking here has been greatly advanced by working (in Scotland) with instrumental music teachers. Most teachers of academic subjects seem to believe that most of the progress made by their students is made when the teacher is present. Instrumental music teachers know this can’t work. The amount of progress a child can make on the violin during a 20- or 30-minute lesson is very small. The real progress comes through practice, and what I have been impressed to see is how much time and care instrumental music teachers take to ensure that their pupils can practise effectively.

So in conclusion, David has certainly provided an effective critique of “assessment for learning” as enacted in government policy and in many schools, but I don’t see anything here that forces me to reconsider how I think about what I call formative assessment. I remain convinced that as long as teachers reflect on the activities in which they have engaged their students, and on what their students have learned as a result, good things will happen.

References

Claxton, G. L. (1995). What kind of learning does self-assessment drive? Developing a ‘nose’ for quality: comments on Klenowski. Assessment in Education: Principles, Policy & Practice, 2(3), 339–343. [This is behind a paywall – I’ve been unable to find a pdf.]

DeBarger, A. H., Ayala, C. C., Minstrell, J., Kraus, P., & Stanford, T. (2009). Facet-based progressions of student understanding in chemistry. Menlo Park, CA: SRI International.

Griffin, P. (1989). Teaching takes place in time, learning takes place over time. Mathematics Teaching, 12–13.

Polanyi, M. (1958). Personal Knowledge: Towards a Post-Critical Philosophy. London, UK: Routledge & Kegan Paul.

Slavin, R. E., Hurley, E. A., & Chamberlain, A. M. (2003). Cooperative learning and achievement. In W. M. Reynolds & G. J. Miller (Eds.), Handbook of psychology, volume 7: Educational psychology (pp. 177–198). Hoboken, NJ: Wiley. [This is behind a paywall, but the paper Co-operative Learning: What makes groupwork work?, also by Slavin, is available as a pdf.]

von Glasersfeld, E. (1987). Learning as a constructive activity. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics (pp. 3-17). Hillsdale, NJ: Lawrence Erlbaum Associates.

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, IN: Solution Tree.