The closed circle: Why being wrong is so useful
Lying to ourselves is more deeply ingrained than lying to others.
A closed circle argument is one where there is no possibility of convincing an opponent that they might be wrong. They are right because they’re right.
Imagine you wake to find yourself in a psychiatric ward, deemed by all and sundry to be mad. Any attempt to argue that you are not, in point of fact, mad is taken as evidence that you are ‘in denial’. Any evidence you cite in support of your sanity is dismissed as an elaborate attempt to buttress that denial. There is no way out of this predicament; no demonstration of your sanity will be accepted by those who have decided they are right because they cannot conceive of being wrong.
If there’s no way in which you can be wrong, then you have created an unfalsifiable argument.
I’ve written a couple of recent posts about falsifiability, which might be worth reading as background before getting stuck into this one. Firstly, there’s “Works for me!” The problem with teachers’ judgement, in which I hold up falsifiability as an antidote to the argument that personal experience trumps empirical data. The crux of my argument is this:
If, in the face of contradictory evidence, we make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please. If we’re told that students’ exam results might improve if we changed our practice we can say things like, “There’s more to education than exam results” and claim that our students are happier, better rounded, or have an excess of some other vague, unmeasurable trait. We can laugh at the idea of measurement and say, “Just because you can’t measure it, doesn’t mean it isn’t important.” We can insulate ourselves from logic and reason and instead trust to faith that we know what’s best for our students and who can prove us wrong?
Then there’s my last post, Is growth mindset pseudoscience?, in which I explore Carol Dweck’s attempts to resist the falsification of her theories and question whether her claims are, as a result, “veridically worthless”. She seems to be saying that if research into mindsets doesn’t produce the expected results, then the teachers, the students, or perhaps the researchers must not actually have had a growth mindset.
I’m dredging all this up because of a couple of interesting comments on that post attacking the worth of falsifiability. Firstly, this:
… I’m not sure the use of falsifiability helps or just invites unnecessary questions about the philosophy of science. Hence, I’m all for questioning Dweck’s position but to do so by invoking a contentious and highly problematic theory about the boundary between science and pseudo-science seems unnecessary and distracting. As many great minds have argued we cannot reduce science to the test for falsifiability and to do so is disingenuous, and misleading. Which I’m sure is not your intention, but for a take down on Dweck maybe the philosophy of science should be left alone.
This was a bit of a surprise because I didn’t realise falsifiability was contentious. The ‘great minds’ who’ve argued against it include Thomas Kuhn, Imre Lakatos, Paul Feyerabend, Alan Sokal and Jean Bricmont. You can read a brief summary of their various critiques here. Now, the great thing about all these arguments is that they neatly sidestep any attempt to say they might be wrong because, you guessed it, they’re unfalsifiable.
Another commenter suggested, “There are areas in which falsifiability is not a viable proposition. We then need to rely on replicable confirmation.” I agree that trying to replicate a test which purports to prove a claim is very useful, but if falsifiability is “not a viable proposition” all we’re left with is uncritically trying to prove things right. And you can prove anything right if you don’t look hard enough. This is crucial because, as physicist Richard Feynman said, “Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.”
And then there was this comment:
Falsifiability might be irrelevant if it is regarding something that can vary rather than is always false or always right. In my understanding of Carol Dweck’s (earlier) work some people have a mixture of fixed and growth mindsets or they might feel that some skills are malleable while others aren’t. A theory about learning seems pretty malleable rather than something that can be proven true or false.
Hang on a minute: if a theory about learning is “malleable”, doesn’t that essentially mean it can mean whatever its proponents want it to mean at any given point? Isn’t it saying that it’s all just a question of interpretation? If you make an empirical claim, then it should be falsifiable. There must come a point at which twisting your ideas to fit the facts becomes pseudoscience; otherwise we can all believe whatever the hell we like and damn the evidence. And that would never happen in education, would it!
Yes, it would. Education continues to be a closed circle in which it’s possible to write something like this with absolutely no sense of irony:
Their methods might work in some limited way to be able to ‘pass the test’ and they can pat each other on the back telling everyone about the fab job they are doing. The point they’re missing in their rants about so called ‘new ideas’ is that just passing the ‘tests’ isn’t an education!
We can argue that what we like ‘works’ because we like it. And if it’s unsuccessful on verifiable metrics then the metrics are worthless. This is the apotheosis of a closed circle: you can explain away any amount of disconfirming evidence as not fitting your paradigm. You’ve given yourself permission to ignore reality and anyone who suggests you might not be wearing any clothes can safely be dismissed as having a fixed mindset.
Back to Feynman:
It doesn’t make a difference how beautiful your guess is. It doesn’t make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong.
I’d like to propose an acid test for opinions in education: if you cannot accept that there are conditions in which you might be wrong, then we should feel free to dismiss your ideas as guff.