We recently conducted a learning walk to observe how the ‘top’ ten students in Year 10 conducted themselves in lessons compared with the ‘bottom’ ten. Top and bottom were defined as those students making the most, or least, progress against their targets. We thought this was a valid exercise, not least because every one of the top students was a girl, and every one at the bottom was a boy. ‘Boys’ Underachievement’ is a dragon still to be slain at my school.
We did the usual things in the review. We sent out SLT and some middle leaders to spy on these kids across a number of lessons; we asked their teachers to complete round robins on them; we interviewed the students themselves. Our main finding: these boys have unproductive relationships with their teachers. My colleague, leading on this review, is using the data collected towards her MA, so I have no doubt these findings will be contextualised and pulled apart and some new knowledge will emerge. (She has until January to get it done and I, as her supervising tutor, look forward to reading it.) In the meantime, however, the headlines have been reported to SLT, and we have had – for me – a dispiriting discussion about What Must Be Done. Already we have decided to chuck mentors at these boys, to investigate behaviour-relationships training for their teachers, and to delve further down into years 7 and 8 to see if we can’t find similarly at-risk-of-disaffection boys. We have done what I think teachers and schools are all too prone to doing: we have perceived a problem and then thrown a ‘solution’ at it.
As a member of SLT, it is not hard for me to find problems: all I have to do is pick up the afters of a fight at breaktime; find a couple of students standing in the corridor, sent there by their exasperated teacher; glance in a detainee’s homework diary and find nothing recorded there; or pick up a ragamuffin’s exercise book and discover it has not been marked in ages. What kind of school is this I’m working in? The kids are violent, the teachers can’t manage behaviour, and as for setting high standards for homework and feedback, forget it. Yet mine is a school which has revolutionised its approach to teacher development in recent years, has embarked on an uncompromising crusade to raise attainment, has lately received a most glowing Ofsted report, and in the summer recorded its highest ever GCSE results, among the best in the borough. How can our outcomes be so impressive, if the evidence of our own eyes indicates we are slackers? It’s because what jumps out at us (the fight, the kid in the corridor, the unmarked book) does not stand for the whole. Steven Pinker (after Tversky and Kahneman) talks about the ‘availability heuristic’: the rule of thumb which urges us to jump to big conclusions from the small bits of evidence which fall into our laps. We see the one child in the corridor, and not the 25 working wonderfully in the classroom; we spend an afternoon unpicking the fight on the field, and forget there were 300 there playing vigorously, chatting amusingly and – sometimes – scribbling the last bits of their homework in time for third lesson. I’d say we are ‘availability junkies’. The flotsam of a typical day is what we notice, not the great ocean that keeps the whole thing afloat.
This can be true, even when we make a special effort to review ourselves more systematically and rigorously. The learning walk I mentioned at the top of this blog was planned and conducted with some care, with a bespoke lesson observation format for all to use and standardised survey questions. When we carry out reviews of faculties and learning in key stages, we first decide upon the questions we want to answer, then devise a mixed approach. I advised my SLT on this method a couple of years ago, to convert mere ‘review’ into something more like research. I think what we do is pretty good, and we have a better idea of what we are really like than ever before. And yet… we are still distracted by the stray child, the unmarked book.
All of this reminds me of the debates which led to, and were then re-sparked by, the ResearchEd conference organised by Tom Bennett and Helene Galdin-O’Shea. My most recent blog, Is Action Research research at all?, responded to several participants of that conference who had varying opinions on that question. My own views are evolving. Now what I think I think is this:
- When school teams carry out a review which adopts a mixed approach, and can triangulate its findings, this might be called a ‘research exercise’.
- This ‘research exercise’ is better still if it addresses a precise question.
- The outcomes of the exercise are more likely to be measurable if you have first understood your starting point.
- Incidents one chances upon are not evidence – they feed the ‘availability heuristic’ and nothing more.
- Action Research, aimed at investigating the impact of a particular intervention in a particular setting, is a valid ‘research exercise’.
I have about six colleagues who have agreed to conduct a piece of action research as part of their appraisal this year; I will be supporting them and will blog on their progress in due course.
But note that I have been calling this a ‘research exercise’. I am calling upon semantics to help me out. The Director of the IOE, Chris Husbands, contended at researchED that action research was good CPD but little more. He is a man for whom I have huge respect (he was, many years ago, the man who gave me my PGCE place), so I take what he says seriously. Perhaps one can say that the process of gathering data within a school self-evaluation or action research context is a research exercise, but cannot in the end be termed research per se, because the context is too specific, too local. That, I think, is fair comment (although I would still argue that ‘new knowledge’ can emerge, and can be applied elsewhere, at least within the same setting). Where I fear our own evidence-gathering exercises depart from honest-to-goodness research is when we try to act upon what we have found. We do all this reviewing, and walk-learning, and action-researching into what has already happened. What we seem incapable of doing is working out the next step: What is to be done now? We suggest a new curriculum without knowing if it is likely to work. We prescribe a mentor – with all the attendant costs – with no evidence of their impact. We do a splurge of homework, not knowing if homework works. Research (proper, academic, peer-reviewed research) has been carried out on many of these questions, and in many cases we have actually read it, or are at least aware of it. But if John Hattie is sceptical of the impact of mentors, or if the NFER doubt the point of TAs, our own instincts tell us otherwise; the anecdotal evidence most available to us tells us otherwise.
So, I have mentioned three levels of evidence-gathering: the academic research that is drawn from wide samples, is peer-reviewed, and that sometimes seeps down to our level; the ‘research exercises’, such as action research and the self-evaluation reviews we conduct ourselves, in and on our own context; and the incidents that smack us in the face, that scratch our consciousness because they occur, and recur, in front of us. The last is the least reliable but the hardest habit to kick. But kick it we should, otherwise we may never work out why boys, in our school, are at the bottom, or work out what best to do about it.
Following on from a previous article on growth mindset, it appears to be a topic that still holds a great deal of interest. Of particular interest to me is how Dweck’s ideas can be used practically and effectively in a school to develop learning… and is there some way that this could be interwoven and embedded throughout the curriculum? Pete Jones (@Pekabelo) describes some great work he has been doing to develop this here.
So, I turned to Twitter. I asked the good people of the twittersphere for some words that described students with a growth mindset – using the letters G R O W T H. As usual the response was great, and a number of people asked if the results could be shared. So here it is… do with it what you will!
Gritty, Grafter, Goal orientated, Galvanised, Grind
Resilient, Reflective, Reliable, Resolute, Receptive, Roused, Reactive, Ready, Responsive
Optimistic, Open, Opportunist, Ownership, Organised, Overcoming, Outward…
How many were there? There must have been 500 and more. How many of them were drawn there by the power of Twitter? Well, surely most of them had been told by the little blue bird that the event was taking place; without doubt, this was a community of Tweachers. Bloggers too, many of them. I am late off the mark with mine (a full 24 hours has elapsed), but I’m calculating that my 244 Twitter followers won’t mind too much. This might get read, it might not, but new knowledge will emerge from it, for me if for no one else.
That’s enough, isn’t it? I have learned something, I am writing about it here, something new is now known. A little of this will seep into the consciousness of the others who are kind enough to listen to me when I am advising them, cajoling them (and doing those other verbs that Dr Joseph Spence, Master of Dulwich College, reminded us amount to teaching.) And that will be enough, or at least it will be something.
Or so I thought, before I spent the day at the Research Ed 2013 conference, mustered and mastered by @tombennett71 and @hgaldinoshea (where do they get it from?) Now I understand that stuff arising from research demands a much higher standard before we can call it new knowledge.
Or perhaps I should start again. An event such as #rED2013 (what hashtag did we settle on?) must mean something different for every individual who attended. What was there – 40-odd sessions, divided into a 7-period day? Master Bennett, are you the timetabler in your school? Independent learners that we are, we self-selected which sessions to attend. So what if we sometimes had to sit on the floor? We could hope to nab a seat by avoiding the catnip of @johntomsett, or the @miss_mcinerney honeypot. In the past 24 hours (and indeed, during the event itself, despite the lack of wifi at pricey Dulwich College), attendees have been retelling the event from the evidence of their own experience, reducing it to its gist, extracting its essence, drawing out its strands.
For me, the theme was: Action Research – should we bother? It seems we absolutely should not, or absolutely should, or should but not in absolutely every circumstance. First, Ben Goldacre. I’m a big fan of his Bad Science and Bad Pharma. I know he’s a charismatic polemicist, and I believe he plays a vital public intellectual role. His advocacy of Randomised Controlled Trials is compelling when applied to pharma; I’m less convinced by his similar advice to the DfE, but I don’t want to be that self-regarding creep (on the slide we didn’t quite see) who naysays before he has quite heard the case. Goldacre, however, dismisses the small-scale work that a teacher-researcher might realistically carry out in their own classroom. He wants research networks of hundreds of schools, where studies can be scaled up. Size matters to Ben, and bigger is clearly better. I wonder how deeply he has considered the differences between a pill and, say, a Physics programme taught across a number of schools. Patients, like pupils, are diverse and will respond to their ‘medicine’ differently. But within a bottle of tablets, all the tablets are the same: the same cannot be said for the various teachers teaching from that Physics curriculum. Each teacher picks their own way through a course, no matter how standardised the materials they work with, and some teachers are ultimately better than others. So RCTs testing the effectiveness of an intervention in education will always have to allow for that variation in input.
Anyway, Goldacre is a smart guy, brilliant at what he does, and surefooted enough to demolish my little puff at his work. In me, he has nothing to fear. Perhaps even with Carol Davenport on my side. She is from the National Science Learning Centre, and her talk was called Using Action Research to Improve Practice. She defends the little guy, the one examining their own practice in their own classroom. So what if the work is not always objective – it can still be rigorous and honest. She espouses the action research cycle, where an action is analysed, a next step is planned, a research question is decided, research is planned and carried out, its results are analysed and shared. There may be bad research questions, but good ones are specific, focused, measurable and (I would add) interesting and important. Observation, questioning, pre- and post-testing are all data collection methods that – though flawed – are also part of what teachers always do. Teachers call this teaching, but perhaps Carol allows us to call this research.
Chris Husbands @Director_IOE would scoff. For him (pace Lawrence Stenhouse), research is ‘systematic inquiry made public’, or it is not research. Action Inquiry is not sharp enough to cut this mustard. It might be inquiring, but it is essentially unsystematic and rarely of sufficient generalisability to deserve publication. Husbands – a generous man – owns that Action Research may have a place in CPD, and it can change one person’s practice. But research it ain’t.
I admit that Philippa Cordingley @PhillipaCcuree is too clever for me to understand a lot of what she says, and she says a lot of it very quickly for my slow-turning brain. But I’m pretty sure she thinks teachers can be engaged in something that – at least at the start – could be called action research, and that this matters. But the bar is still set high. Teacher-researchers need to know what is known. They must be clear about what is, and what is not, working. They must have a passion to make the difference for their students. And (there will be a catch here for many) they need peer and expert support, including coaching in research methods. Action Research may be possible, and will be useful, if it has access to specialist expertise, uses the evidence well, and engages the teachers in collecting and reflecting on the evidence in their schools.
What am I to tell those colleagues who don’t tweet, blog or attend geeky conferences on a Saturday? Actually, quite a few of them have engaged in Action Research, gamely claimed new knowledge and have acquired their MAs thereby. Dr Husbands might tell them they have done some worthwhile CPD, and Dr Goldacre would tell them to randomly find 200 friends doing the same thing and he might then be interested in them. But I still hold that, where they have asked a meaningful question, collected their data with care, and analysed how what they have found might be applied elsewhere (even where this might simply be down the corridor in the same school), then, well, they have just done some research.