As I’ve been observing students in Survivalcraft this week and thinking of ways to improve my diffi-tool (Student Resources Wiki Page), I’ve been reminded of why MinecraftEdu is so useful for engaging and assessing students. It’s not simply because they are using a game, but because each scenario offers many opportunities for formative assessment and gives students a complex yet tangible, achievable task to complete. Willis (2011) points out that video games provide the “achievable challenge that the player can reach with practice, effort, and perseverance”, something we clearly see in Survivalcraft. She also applies the motivating elements of games to the classroom, noting that by “using achievable challenge, motivating goals & feedback about incremental progress in the classroom, with the scaffolding provided for support — students are motivated to strategically build mastery”.
One of our Givercraft teachers expressed interest in advocating for a MinecraftEdu server after seeing how motivated and engaged the students were during the experience. For my diffi-unit, I am writing a unit plan on Matthew Henson: The Arctic Explorer for her 6th grade ELA class. I wanted to help the teacher demonstrate how the game can be integrated into a current unit plan, and much of the content I am using came from this teacher. Incorporating differentiation has helped me to think specifically about how I plan to use both formative and summative assessments. To briefly answer the questions posed to us this week: my assessment tools are standards-based, open-response, and both high- and low-stakes measurements of student progress and learning.
I have three main strategies for using both formative and summative assessments to enhance (or at least not interfere with) intrinsic motivation. First, I try to focus on the “Big Idea”, or the Standard that needs to be met, and how I can present it in a way that is engaging and will enhance students’ intrinsic motivation. It helps me to think about that big idea in a real-life context: when the big idea is applied, how can we determine that the standard is being met? Planning lessons with tools such as the UbD template helps me focus on the big idea and consider the contexts in which that standard will be met. Kohn’s (2007) examination of the contexts around cheating reinforces this:
“Grades, however, are just the most common manifestation of a broader tendency on the part of schools to value product more than process, results more than discovery, achievement more than learning. If students are led to focus on how well they’re doing more than on what they’re doing, they may do whatever they think is necessary to make it look as though they’re succeeding.”
Second, I focus on how to incorporate student voice and choice in determining which formative and summative assessments are effective and meaningful in meeting the Standard(s) or in “getting the big idea”. A fundamental belief I hold about learning and assessments is that they are not “for the teacher”; they belong to the student. I don’t view them as separate responsibilities, with the student’s role being to learn and mine being to assess. I believe my role is to teach or guide a student not only to learn but to be able to assess their learning in a way that is personally meaningful. If I can help a student measure their own progress with my guidance, I believe that will support intrinsic motivation. Popham (2014) points out the mistakes often made when using criterion-referenced assessments, and the last two he identifies are important to be aware of when I partner with students to assess their learning: keep it to a “reasonable number” of assessments and use “practical language” in articulating what is being assessed (p. 66). Ultimately, the assessment should inform the student of their progress and ability in meeting the standard and understanding the big idea, as well as help me evaluate the effectiveness of my instructional approach. The formative assessments throughout the learning process should be undertaken by both me and the student; this can be formalized or organized to help the student manage their time, or it can be reflective, ensuring that they keep the big idea in mind and use a rubric to measure their progress. A summative assessment should also “circle back” to the big idea: the student should be able to understand how the stepping stones throughout the lesson led them to it.
Students certainly need my guidance in designing formative and summative assessments, but I need them to help me understand which of these tools will best meet their needs and learning objectives. This leads me to my third strategy: differentiation is an important tool in maintaining students’ intrinsic motivation. Differentiating the assessments means that I am responsive to what students need and to which tools will motivate them in their learning process. I need to provide valid and effective options for assessment, and I must be willing to adjust or adapt these options as needed. Tomlinson and Moon (2013) challenge us to consider “error, reliability, validity, and bias” (p. 125) when we use assessment tools and strategies, and they remind us that we aren’t “grading students on different goals” but instead giving “feedback and grades based on a student’s status” (p. 126).
In my own professional practice, evaluating and assessing are an important aspect of my approach, but not necessarily grading. In out-of-school time programs, the emphasis is often on outcomes, and participants are not graded; their “success” in a particular program could be based on their attendance, behavior, effort, interactions with others, progress, and impact on a group or contributions made in the program. In my own programs, I use approaches similar to those used in a school classroom, but only as they are needed or appropriate for the program goals and participant learning objectives. The Learning in Afterschool project identifies five principles to guide programming that supports meaningful learning: learning that is (1) active, (2) collaborative, (3) meaningful, (4) supportive of mastery, and (5) horizon-expanding. I use these principles in making programming decisions and evaluating the effectiveness of programs, but also in the assessment of participants.
Popham, W. J. (2014). Criterion-Referenced Measurement: Half a Century Wasted? Educational Leadership, 71(6), 62–68. Retrieved from http://egandb.uas.alaska.edu:2048/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eft&AN=94925708&site=ehost-live
Kohn, A. (2007). Who’s Cheating Whom? Phi Delta Kappan. Retrieved 13 April 2015 from http://www.alfiekohn.org/article/whos-cheating/
Tomlinson, C. A., & Moon, T. R. (2013). Chapter 6: Assessment, Grading and Differentiation. In Assessment and Student Success in a Differentiated Classroom. Alexandria, VA: Association for Supervision & Curriculum Development (ASCD). ProQuest ebrary. Retrieved from http://egandb.uas.alaska.edu:2081/lib/uasoutheast/reader.action?ppg=135&docID=10774725&tm=1429430592937
Willis, J. (2011). Understanding by Design Meets Neuroscience. ASCD Edge. Retrieved from http://edge.ascd.org/blogpost/understanding-by-design-meets-neuroscience-judy-willis-md-med