Kicking the ‘availability’ habit.

We recently conducted a learning walk to observe how the ‘top’ ten students in Year 10 conducted themselves in lessons compared with the ‘bottom’ ten. Top and bottom were defined as the students making the most, or least, progress against their targets. We thought this was a valid exercise, not least because every one of the top students was a girl and everyone at the bottom was a boy. ‘Boys’ Underachievement’ is a dragon still to be slain at my school.

We did the usual things in the review. We sent out SLT and some middle leaders to spy on these kids across a number of lessons; we asked their teachers to complete round robins on them; we interviewed the students themselves. Our main finding: these boys have unproductive relationships with their teachers. My colleague, who is leading on this review, is using the data collected towards her MA, so I have no doubt these findings will be contextualised and pulled apart and some new knowledge will emerge. (She has until January to get it done and I, as her supervising tutor, look forward to reading it.) In the meantime, however, the headlines have been reported to SLT, and we have had – for me – a dispiriting discussion about What Must Be Done. Already we have decided to chuck mentors at these boys, to investigate behaviour-relationships training for their teachers, and to delve further down into Years 7 and 8 to see if we can’t find similarly at-risk-of-disaffection boys. We have done what I think teachers and schools are all too prone to doing: we have perceived a problem and then thrown a ‘solution’ at it.

As a member of SLT, it is not hard for me to find problems: all I have to do is to pick up the afters of a fight at breaktime; or find a couple of students standing in the corridor, sent there by their exasperated teacher; or glance in a detainee’s homework diary and find nothing recorded there; or pick up a ragamuffin’s exercise book and discover it has not been marked in ages. What kind of school is this I’m working in? The kids are violent, the teachers can’t manage behaviour, and as for setting high standards for homework and feedback, forget it. Yet mine is a school which has revolutionised its approach to teacher development in recent years, has embarked on an uncompromising crusade to raise attainment, has lately received a most glowing Ofsted report, and in the summer recorded its highest ever GCSE results, among the best in the borough. How can our outcomes be so impressive, if the evidence of our own eyes indicates we are slackers? It’s because what jumps out at us (the fight, the kid in the corridor, the unmarked book) does not stand for the whole. Steven Pinker (after Tversky and Kahneman) talks about the ‘availability heuristic’: the rule of thumb which urges us to jump to big conclusions from the small bits of evidence which fall into our laps. We see the one child in the corridor, and not the 25 working wonderfully in the classroom; we spend an afternoon unpicking the fight on the field, and forget there were 300 there playing vigorously, chatting amusingly and – sometimes – scribbling the last bits of their homework in time for third lesson. I’d say we are ‘availability junkies’. The flotsam of a typical day is what we notice, not the great ocean that keeps the whole thing afloat.

This can be true even when we make a special effort to review ourselves more systematically and rigorously. The learning walk I mentioned at the top of this blog was planned and conducted with some care, with a bespoke lesson observation format for all to use and standardised survey questions. When we carry out reviews of faculties and learning in key stages, we first decide upon the questions we want to answer, then devise a mixed approach. I advised my SLT on this method a couple of years ago, to convert mere ‘review’ into something more like research. I think what we do is pretty good, and we have a better idea of what we are really like than ever before. And yet… we are still distracted by the stray child, the unmarked book.

All of this reminds me of the debates which led to, and were then re-sparked by, the researchED conference organised by Tom Bennett and Helene Galdin-O’Shea. My most recent blog, ‘Is Action Research research at all?’, responded to several participants of that conference who had varying opinions on that question. My own views are evolving. Now what I think I think is this:

  • When school teams carry out a review which adopts a mixed approach, and can triangulate its findings, this might be called a ‘research exercise’.
  • This ‘research exercise’ is better still if it addresses a precise question.
  • The outcomes of the exercise are more likely to be measurable if you have first understood your starting point. 
  • Incidents one chances upon are not evidence – they feed the ‘availability heuristic’ and nothing more.
  • Action Research, aimed at investigating the impact of a particular intervention in a particular setting, is a valid ‘research exercise’.

I have about six colleagues who have agreed to conduct a piece of action research as part of their appraisal this year; I will be supporting them and will blog about their progress in due course.

But note that I have been calling this a ‘research exercise’. I am calling upon semantics to help me out. The Director of the IOE, Chris Husbands, contended at researchED that action research was good CPD but little more. He is a man for whom I have huge respect (he was, many years ago, the man who gave me my PGCE place), so I take what he says seriously. Perhaps one can say that the process of gathering data within a school self-evaluation or action research context is a research exercise, but cannot in the end be termed research per se, because the context is too specific, too local. That, I think, is fair comment (although I would still argue that ‘new knowledge’ can emerge, and can be applied elsewhere at least within the same setting). Where I fear our own evidence-gathering exercises depart from honest-to-goodness research is when we try to act upon what we have found. We do all this reviewing, and walk-learning, and action-researching into what has already happened. What we seem incapable of doing is working out the next step: What is to be done now? We suggest a new curriculum without knowing if it is likely to work. We prescribe a mentor – with all the attendant costs – with no evidence of their impact. We do a splurge of homework, not knowing if homework works. Research (proper, academic, peer-reviewed research) has been carried out on many of these questions, and in many cases we have actually read it or are at least aware of it. But if John Hattie is sceptical of the impact of mentors, or if the NFER doubt the point of TAs, our own instincts tell us otherwise; the anecdotal evidence most available to us tells us otherwise.

So, I have mentioned three levels of evidence-gathering: the academic research that is drawn from wide samples, is peer-reviewed, and sometimes seeps down to our level; the ‘research exercises’, such as action research and self-assessment reviews, that we conduct ourselves, in and on our own context; and the incidents that smack us in the face, that scratch at our consciousness because they occur, and recur, in front of us. The last is the least reliable but the hardest habit to kick. But kick it we should, otherwise we may never work out why the boys in our school are at the bottom, or what best to do about it.


3 thoughts on “Kicking the ‘availability’ habit.”


  2. Interesting blog as ever, Mark. One thing that strikes me about your learning walk is the temptation to read correlation as causation (i.e. because children who don’t make progress also have poor relationships with staff, does that mean one causes the other, and if so, which way round would that work?). Of course we can also miss the fact that symptoms of underachievement are all related (low rates of progress / disengagement / poor attitude / poor relationships / poor behaviour etc.). One doesn’t explain the other; they are just part of the same problem. I think the most important thing teachers can do in a school context is to make space to really talk to the young people, to try to explore their perceptions of their own position and experience. Surely unless we try to listen to them (by which I also mean help them to articulate what they think and thus help them make sense of the complex stuff going on in their lives) we are always just guessing at what to do, and often spending quite a lot of time and effort on new initiatives… Some people have described this kind of work as ‘participatory action research’ and I shudder to think what Chris Husbands would make of that!

    Anna Carlisle has published some interesting thoughts on this: http://www.academia.edu/1312124/Critical_bureaucracy_in_action_Embedding_student_voice_into_school_governance

  3. Lee, thanks for taking the time to read and respond to my blog. Now I know you are reading it, I will have to take greater care. I will have to give some thought to ‘participatory action research’, and thanks for the Anna Carlisle link.
    The ‘correlation trap’ is an aspect of the availability heuristic which I overlooked here. I have found, to my satisfaction, that colleagues get really irritated when I challenge them to distinguish between correlation and causation!
