
"Assessing Approaches to Assess Multimodal Projects"


When I was reading Cheryl Ball’s article about assessing multimodal projects, I thought about the ways my school assesses student work in general. We rely on rubrics, criteria for success (CFS) checklists, and exemplars to guide us on what to look for when evaluating the work students produce: Did they show their work and follow the steps to complete the math problems? Did they have clear evidence to support their thesis statements? Did they incorporate voice and creativity in the assigned project? Teachers develop some of these evaluative tools; in other situations, the tools come from our curriculum guides. While these evaluative tools tend to emphasize product over process, I have found criteria for success checklists and exemplars to be most useful when observing and monitoring student work in action. I appreciated Ball’s distinction between hyper-prescriptive assessment measures and completely free-spirited explorative activities. As a student and a teacher, I prefer to have, at the very least, a basic roadmap or list of expectations when completing an assignment or task. I like it when an assignment feels neither constrictive nor nebulous. I also realize that I’m working with several different learning styles, and students are not all going to prefer the same methods. As Ball explains, “I am also not suggesting that the classroom become an intellectual free-for-all…I am suggesting that assignments that predetermine goals and narrowly limit the materials, methodologies, and technologies that students employ in service of those goals while ignoring the complex delivery systems through which writing circulates, perpetuate arhetorical, mechanical, one-sided views of production” (285).



The parameters that Ball outlines are all ones I would consider using with students because they align with expectations students are already familiar with from other assignments. The descriptions bullet-pointed under the conceptual core, research component, and form and content parameters are all things I would still want to see in multimodal assignments (66). Additionally, just as I would expect students to consider audience and timeliness in non-multimodal assignments, I would expect the same in multimodal assignments. The creative realization parameter, specifically the descriptor “the project must achieve significant goals that could not be realized on paper,” is the assessment measure that seems most unique to multimodal projects (66). This parameter is important to me because I definitely want to see students show their creativity in new ways and think outside the alphabetic box. I know, however, that “being able to teach scholarly multimedia requires lots of reassurance for students, lots of reminders that it is not about the finished product but about the trying” (72). Therefore, I would include revision and resubmission as part of the grading process to help me understand students’ thinking processes in relation to their finished products. I thought Ball was pragmatic in noting that even people who devote their research to multimedia scholarship go through many cycles of revision and resubmission for their own webtexts in online journals like Kairos. Seeing a student’s revisions and drafts of an assignment tells a much richer story about the end product.


Another part of Ball’s article that I really enjoyed was her discussion of empowering learners to come up with the ways they are graded or assessed. Giving students a more active role in their learning is definitely an element from Ball’s article that I would implement in my classroom. Empowering learners also happens to be a piece I highlight in my teaching philosophy. This is an important consideration because if I wanted to provide the element of choice in my assignments, I might not have perfect criteria for success or rubrics to match every single idea a student had for conveying their learning. As I read Ball’s article, I wondered to myself, “How does a teacher create a rubric for something that has not yet been created?” All of these thoughts and considerations seem time-intensive and exhausting for me as the teacher! These moments of panic, however, serve as a reminder that students are the classroom’s and school’s most underutilized resource. If my students have creative visions that they are passionate about and motivated by, then it only makes sense to share authority and give them ownership of their grading and scoring. Ball notes, “The important thing for teachers to remember here is not that Kuhn +2 is the rubric you should use to assess scholarly multimedia or other kinds of digital media, but that the rubric needs to be created fresh, with students, for each kind of project you assign” (68).


Ball’s recommendation for peer review also resonated with me because it represented another chance to empower students. She states, “it is more important to me that students can assess each others’ work through the peer-review letters they write to each other after their rough draft workshops” (74). Right now, I give students the majority of their feedback. Other feedback comes from a self-assessment form (see image below) I designed last year. This form gives students a way to engage in metacognitive processes about their own learning after I teach a lesson. They can also give me direct feedback on how I taught the lesson. What is still missing from our classroom, though, are more opportunities for students to give each other feedback. I realize that I do not build this feedback structure into my teaching as much as I would like. Perhaps doing more multimodal assignments and activities in class, as well as giving students a say in their grading criteria, will make me more aware of the need to create peer-to-peer feedback opportunities.



[Image: student self-assessment form]
