For me, feedback was the underlying theme of the six weeks of our intensive Module Two. There seem to be parallels across the different layers.
In the classes being taught for LSAs – did the learners get feedback from the teacher on what they did? Should they have? Did it maximise learning? This came up a lot.
The teachers were getting feedback from peers – was it the right kind? There is a point I should somehow make clearer: peer observation and observer feedback for R&A assignments are different things.
The teachers were getting feedback from the tutors on LSAs and R&As (and, among other things, the issue of to what extent they should get feedback on peer observations arose again).
There was a tutor in training on the course who needed feedback on what he was doing.
Having a tutor in training meant I got indirect feedback on many things about the course in a way that I don’t usually, as his perspectives on what was happening were those of someone coming from another environment and seeing things as a whole.
Teachers gave feedback on the course (because that is something I always remember to get done on M2 and really have to put in place more systematically on M1 and M3).
Somewhere in the middle of all that, I read in Inside Higher Ed about a Coursera MOOC not giving its learners the answers. I can see where he is coming from – we rely on people not handing round ‘answers’ on our Module One courses for the same kind of reasons, but the answer still reads uncomfortably – why? It is well reasoned enough.
Then yesterday, the intensive being finished (mostly), I did the online BULATS examiner training. You are given examples and criteria, you mark them and look at the answers, then you do a batch with no answers and are told whether you have been approved as an examiner or not. This morning’s mails tell me I have been approved as a speaking examiner, but not on the writing. They also say I have 30 days to attempt the writing again, but give no feedback on what I did.
My initial reaction to the BULATS mail was that the system was flawed, as I needed to know whether I had got all five wrong or one, whether I was consistently marking too hard or too soft, or something else altogether. Knowing that would mean I could adjust what I had done to better meet the standard. When you think about it for a bit longer, though, that is actually the quick fix. If I knew that, I would look at the criteria again, but probably only briefly, with the idea that I had already studied them, trying to gauge what it is in my perspective I need to shift. If I don’t know, then I have to look at the criteria again from the bottom up, to try to see what I may have missed or, if there is nothing obvious, to try to understand more fully and look at the scripts more carefully – and I guess that will lead to a better understanding, one that will serve as a sounder future base. Does it?
Is that the question underlying a lot of those situations above? Are they really all about the same thing – the quick fix versus self-awareness – or not?
1 Classroom feedback
For various reasons in various lessons I was writing ‘think about how to give them feedback’. Sometimes it was just the pressure of classroom management and time, and whether or not the learners actually got to know whether their match was correct (doing it by sticking things on walls or making it into a race or game added to the fun and colour, but then left someone with a logistical problem they hadn’t quite worked through). If you have been given eight words and eight pictures to match, you need confirmation at some point that you have matched them up right (though you might ask them to check in pairs, check the match of other pairs etc. first), so the teacher could hand out a key, though they could also stage that and just point out the ones the pair hasn’t got yet and ask them to rethink the rest. But if they don’t know, they need to know at some point, so you should have that ready to be fed in somehow. But is that feedback? Or is that information? Am I mixing two things up here? Is the feedback just them knowing how many they managed on their own?
It was less clear cut where learners were doing something that resulted in more general written or spoken output. If you have set things up so they have some new technique to try or some new phrases to deploy, should you always try to round up with some kind of response to what they did (as I was usually asking for a concrete response to be included)? And then to what extent should that be a response to the content and to what extent to the language? But in a Delta LSA, where you are trying to show you are aware of a lot of things in the space of 50–60 minutes (often almost miniaturising what you would be able to spread over a 3- or 5-hour sequence of everyday teaching), if there is no response there ought at least to be a rationale for that decision. It should be clear it was a decision made, not a stage forgotten. Once learners get used to a response, they look for it. In my classes this last year or two they all snap it with their phones once we are done (how useful that is and whether they ever go back to it being different questions – they think it is useful enough to want a record). That does feel like feedback – it might not include everyone’s output every time, but it is language they have just produced and it helps them see what they can change (it often runs to rewrites rather than just grammatical corrections). Without it (or some variation on it) they still have the opportunity to practise / try things and they still might notice things on their own; even with it there is no guarantee they will be able to deploy anything I have pointed out next time.
Would they be more likely to retain / use things they had noticed with no prompting? Is that the same as me having no feedback on what I did or didn’t get right in the BULATS training?
2 Peer feedback
On M2 teachers do 10 peer observations, and on the intensive they move round, initially watching their own tutor group and then anyone else in the group. The peer observations are designed to get people to focus on classroom management and interactions and to use that as leverage to reflect on their own practice. The first couple are read by a tutor just to make sure they are more reflection than description (and to gently steer them away from becoming an evaluation of someone else’s teaching, as that is not the goal). The teachers are also encouraged to use each other to provide data for R&A assignments. They should ask others to collect instances of things they are trying to investigate (usually through self-designed charts / questionnaires), but on the intensive course, because they know others are watching and writing peer observations, they ask for copies of these instead. Occasionally it works. One teacher was totally unconvinced by what I was suggesting about his instructions, but came back to me in horror having found quite a lot of them scripted out (something of an achievement given their length – I’d only written one down) in someone else’s peer observation. So the teacher is getting information (things that did or did not happen). Is that feedback? The data? They can work out whether they are happy with the situation or not and adjust their behaviour / practice / ideas accordingly. That fits better with the more usual definition. Would the BULATS equivalent have been to be told how many I got right?
3a Tutor feedback
The peer observations question comes up again and again. Teachers (understandably) want someone to look at the things they do as part of the course, so when they start to upload the peer observations they expect a response, and we have found that if we don’t look, they (often) don’t do them. But the peer observations are for the teacher (as part of the process), not an element of the course that needs to be graded or evaluated. They are about using different tools to generate data for reflection and to feed into the reflection and action assignment. The idea of the R&A (and of the PDA, and even of Delta as a whole) is that a teacher should be setting themselves up as a life-long reflective practitioner, so they will (in a perfect world) continue to look at aspects of their practice because they are interested in them (as opposed to because someone will grade it or it is required). So our current approach is to look at the first three just to make sure they are off in the right kind of direction, chat in comments about things that are interesting, and then say that they should finish them but we are not going to look at any more. In fact the chat is enjoyable, but asking questions sometimes leads to someone asking if they should rewrite (rather than, as is the intention, just thinking a question through further). So the extent to which we get across the message that these are about working out your own questions (and that that is actually what the course is about, rather than finding ‘set’ answers) varies. But perhaps we are all guilty of wanting quick answers so we can readjust and be ‘right’ (someone else I read reacting to the MOOC answers story even linked to Monty Python on this) – that was my initial reaction to the BULATS mail. And can I really say there is no such thing as ‘right’? If someone simply describes the lesson or is judgemental about the teacher or learners in an early peer observation, I’d try to steer them to where I think they should be.
Does that mean there is a ‘right’ version, or only that there are versions that are not useful for that task at that time?
3b Tutor feedback on assignments
To what extent is it feedback and to what extent is it evaluation? We call it feedback, but is that more a matter of habit than accurate labelling? It should make clear what they have done effectively and what they need to change with regard to meeting the criteria, so it can’t just be data about what is (‘you talked a lot’ – is that a good thing or a bad thing? it depends on why, when, about what); it has to be what was and why it did or didn’t help them meet the criteria, e.g. ‘You exploit contributions from the learners well, getting Lara to answer her own question about whether she should use the.’ So if feedback is data that helps someone change a process or performance for the better, does showing them what they did to contribute to a good performance (or at least to faster and more effective learning outcomes for their students) help speed up the impact of feedback? Some of those early lessons had no feedback after an output stage – would the teachers have gained as much, as fast, if there had been no tutor report on the lesson? (At least the BULATS mail told me I had been unsuccessful, even if not how.) But in the course feedback the one thing they are usually unanimous about is that they valued the feedback on assignments, so even if it were not a high-stakes course, I’m guessing they would want some kind of response / reaction and not to be told that they would reach their own conclusions in time. One comment about feedback being too prescriptive stood out this time (in so far as no one has ever said that before), but, as is very often the case, it was neatly balanced by one that asked for it to be more so (which does get said occasionally). In both cases, though, without examples of what provoked them, it is hard to know whether it was a matter of not catering to personal preferences, mismatched expectations or an incident we could have handled better.
That in itself, though, comes back to the same question – just being told something is good or bad is not as fast a fix.
4 Feedback for the tutor in training
I realised as we went on that he got direct feedback on the sessions he delivered and on some of the reports he wrote, but with early Delta 5as I had said: write it, then look at what you wrote and what I (or whichever tutor is concerned) wrote, and write a quick summary of what you can see is the same and what is different – and what that makes you think about your marking. So that really is do-it-yourself feedback (though I would have been happy to see the actual marks given on the BULATS test, as that would have allowed me to re-gauge my stance, and that seems equivalent). The first time he showed me one of these reflections rather than just the report he had written, I wondered if I should be responding to that (in that a person wants a response), but decided that seemed to be heading off down a never-ending rabbit hole of reflection and response. It did make me think, though, that the programme should not only consist of tasks but also make clear what response could be expected to those tasks (or when one shouldn’t be expected).
5 Feedback from the tutor in training
This turned out much like the teachers and their peer observations. He wasn’t evaluating the course (well, at times he was in a way, but it wasn’t his primary purpose), but he was documenting a lot of what happened in a fairly objective way, which makes for interesting reading and makes me even more determined to come up with some kind of crowdsourcing platform for us to work on session content all together. It works as it is, but I think it could work better if we were all more aware of and invested in it as a whole, so the course is everyone’s rather than some sessions / areas seeming to be more the province of someone. I want an online resource that lets me link (and others edit and contribute) across modules and syllabuses, kind of like the 3D chess they used to play on Star Trek…
6 Feedback from the teachers
This is usually a patchwork of things I can do something about, interesting suggestions I’m still brooding on, things that are not in my hands (‘the course asks for too much’), and things I know about and have on the ‘to do’ list (rewrite the peer observations, especially for the intensive version of the course). But M2 is the only course where we have the feedback set up to be gathered systematically. It always throws up something useful, and I need to do more about putting online versions in place for M1 and M3 to trawl systematically for what people found useful / would like more of.
In conclusion?
Not giving feedback could be a valid option, and possibly leads to the greatest learning for a self-motivated learner with the time to go that route, but all of us look for a response, so if we aren’t going to get one, we should at least get a reason why not…
Feedback can just be a question of saying something is right / wrong / good / bad, or it can be data (so telling someone what happened, or even that something happened, is also feedback), but feedback that helps the person change the performance or process faster needs to start from what the expected outcome was and refer specifically to how behaviours did or didn’t contribute to it. And like everything else I start to explore here, it has turned out to be a much bigger question than I first thought it was.