In much of our teaching, assessment of practical performance is based on ‘snapshot’ views in which we tick or grade the particular performance ‘seen’. Even with peer- or self-assessment we provide students with set criteria about the performance in question and ask them to make judgements about their own and others’ performance.
I remember this in my own practice, when I used the Games Performance Assessment Instrument (GPAI) as a way of supporting students’ peer-assessment. The GPAI is designed to assess decision-making and skill execution, which is one of the reasons I used it within my unit of Teaching Games for Understanding (TGfU). The way I used the tool was to ask students to ‘tick’ the criteria on the sheet when they had observed their partner or teammate perform a particular action. However, as a colleague once said to me, it is nearly impossible to assess these things by simply observing someone else because, in short, you have to second-guess the decision that you think they made and then decide if the skill was performed as they intended. As I reflect on this unit and the experiences of ‘Adam’, I can see this colleague’s point.
Adam was a low-skilled pupil who was struggling with the full-sized racquet and the high-compression tennis ball his team had insisted that he use. He was performing as best as he was able, but either he didn’t know what decisions to make, couldn’t execute the skill, and/or his team couldn’t recognise this even if he could. Every single one of his attempts was marked as incorrect on the GPAI. The effect this had on his behaviour and attitude was telling. I assumed that the assessment would assess learning, but instead it assessed second- or third-hand interpretations of actions.
Adam’s skill or expertise couldn’t stand up to the black-and-white assessment criteria I had exposed him to, and he responded first through retaliation (marking his peers as incorrect) and then through anger (throwing the clipboard away, bursting into tears and storming off). This wasn’t his fault (although I blamed him for about five minutes); it was my fault (and I’ve blamed myself ever since that event in the summer of 2004). This assessment wasn’t about development but about meeting expectations, and it led me to question a lot about my teaching. I guess what I am trying to say is: when we assess, what are we actually assessing? Intention is one thing, but what is the reality? We can do as much harm as we do good if we get this key aspect wrong. This is the very essence of this week’s paper by Rink and colleagues.
The Paper
Given my recent look at Teaching Games for Understanding in these blogs, it seems apt that this week’s paper seeks to challenge some of the expectations we have around tactical understanding and learning. Primarily, Rink and colleagues set out to critique the literature around learning and instruction in games teaching, and asked challenging questions about how we go about assessing the manner in which our students are developing in their games practice.
What stands out, to me, in Rink and colleagues’ paper is the challenge they lay down to educators around what it means to learn in physical education. At the root of the authors’ concerns was the manner in which ‘learning’ in games is assessed (i.e. made quantifiable) and the implications of that measurability for the game itself. They were particularly concerned that, in the very measurement of learning (i.e. expertise), the respective expertise of an individual becomes decontextualized, compromised, and ultimately detached from the game itself.
In discussing learning, Rink and colleagues suggested that, in physical education, this might equate to games expertise. Furthermore, when considering the development of expertise from an educational perspective, they suggested that related measurement issues became increasingly significant. Specifically, they wrote: “game playing, like other human endeavours, particularly those that are performance related, is probably best understood in terms of issues related to what constitutes expertise and how expertise is developed.”
Working on a continuum, Rink and colleagues suggested that expertise (at least the measurement of it) could be understood as knowing “what to do” and then “doing it”. In other words, degrees of expertise could be ‘seen’ in an individual’s response selection (cognitive or declarative knowledge) and response execution (skill or procedural knowledge). This, they argued, was how expertise was assessed in schools, and yet it took little account of the difficulty of effectively transferring in-game expertise to out-of-game testing.
To address this, they argued that we needed to think more holistically about how expertise and the game were interrelated, and to acknowledge the limitations of any form of assessment of game expertise. They suggested that we start to think of the “what to do” – “doing it” continuum as being multi-layered: (1) awareness of selection and execution, (2) selection and execution in controlled contexts, and (3) selection and execution during game play.
Awareness of selection and execution was represented as the “ability to bring the knowledge [of selection and response] to conscious awareness and usually verbalize a response in a written or oral fashion.” Put more simply, it meant asking pupils what they would do in a given situation.
Selection and execution in controlled contexts utilized skills tests to “control the context of performance.” This allowed for the simulation of in-game scenarios and maintained some of the contextual factors of the game while still allowing some degree of assessment to occur.
Selection and execution during game play represented, the authors argued, some of the more complex aspects of decision-making and skill execution, but even these were limited. They claimed that, instead of measuring the actual event, such tests measured only the apparent accuracy of the decision made, i.e. did the performer appear to make the right decision, and did it succeed?
In light of these points raised by Rink and colleagues, I ask you to reflect on and consider how we assess. When we develop an assessment tool for use by ourselves or by our students, we need to ask whether it is really assessing what we intend it to assess. We also need to consider student learning: if we are promoting decision-making in lessons, then we need to consider how best this can be achieved, and tick boxes or measures of performance may not always be appropriate. So, the next time you set up a skills test, ask yourself: “What am I teaching my students to do?”
What’s next? As part of this series of blogs I propose the following as a way of considering the implications of this research for your teaching: Think, Act, Change (or TAC for short).
Think about the findings of the paper – do they resonate with you? Use Twitter (@DrAshCasey) to ask a question, seek clarification, or maybe challenge the findings.
Act on what you’ve read. What do you believe? Is it your responsibility to make changes or is this just something else that I’ve put on your plate? Is there action to take? If so, what might it be?
Change what you do in response to your thoughts and actions. Is this a personal undertaking? If you want to do something, or are looking for help, then please let the community know about it.
I wouldn’t expect every paper to get beyond the T or even the A of TAC but if one paper resonates enough to get to C then hopefully all this is worthwhile. Good luck.
Reference
Rink, J.E., French, K.E., & Tjeerdsma, B.L. (1996). Foundations for the learning and instruction of sport and games. Journal of Teaching in Physical Education, 15(4), 399-417.