Collaborative Assessment: Transforming Assessment into a Learning Opportunity
I suspect that many of us think of grading as a one-way transaction in formative assessment. Ideally, grading can motivate students by helping them gauge how much more effort is needed. Practically speaking, students complete the work we assign, and we return it with some form of acknowledgement: feedback and a score indicating that we evaluated its quality against the assignment’s guidelines. Cumulatively, the grades accumulated over a semester form the basis of the student’s overall success in the course. What we are talking about here is essentially faculty-centered grading. This assessment model presupposes that the quality of the feedback empowers learners to grow, that they interpret the feedback correctly, and that they then engage in internal self-reflection on it.
Grading can be so much more than a communication device between student and instructor. Emerging literature on assessment points out that the pedagogical shift from student as passive learner to deeper-level active learner is best served when assessment tools follow a similar path. In other words, student-centered assessment, much like learner-centered teaching, offers a way to move students from passive learning to active and engaged learning.
Two years ago I began experimenting with an interactive assessment model. It bothered me that students repeated the same structural and explanatory errors over and over, regardless of how often I had pointed them out in previous exercises. Since writing is such an essential skill, I decided to review the literature on assessment and was introduced to the idea of minimal marking. Minimal marking argues that learning is reinforced when the student, not the instructor, identifies errors or areas of weakness. A second point is that it is more effective to focus on a few significant errors than to overwhelm the learner with notes on every little thing. The idea here is to speak to the learner rather than to the errors. Interesting, but it was not immediately clear how to apply these principles. A little more reading led me to the idea that students need to be engaged in the assessment process if they are to move away from passive learning. The rationale is that instead of passively looking at errors the instructor points out, students should take the more active role of diagnosing and addressing the fault. Essentially, student-centered assessment focuses more on the process of learning than on the product of that learning. The great benefit here is that the learning process is a transferable skill, one that transcends content area.
Synthesizing these two ideas, I developed the Assessment as Collaboration (AC) document and the Fix-It exercise (FIE). The AC document challenges students to diagnose their own errors and lays out the rationale for why I deliver feedback in this collaborative rather than hierarchical, corrective form. It provides them with a key to the editing symbols I use in the margins to note organizational issues and mechanical errors in their written work. It also explains that content evaluation is indicated in the body of the text.
The FIE provides students with an opportunity to review and correct organizational and mechanical errors. This task requires that they identify the problem I have noted next to a particular line with a symbol in the margin. They correct the problem and return the assignment to me for re-review. This process is labor-intensive both for them and for me, but the end result has been very positive reviews and, I think, more thoughtful writers.
Below I describe the mechanics of deploying the AC and FIE.
- In SOC 2302, a general education course, I limited my assessment to two or three major errors to avoid overloading students. I returned papers and gave them 10 minutes to review. A small but growing number typically tried to identify and correct the errors, even though I offered no incentive beyond generally staying after class to talk with them individually.
- In SOC 3320 and SOC 3342, writing-in-the-major courses, I gave students 15 minutes to identify and correct organizational and mechanical errors. Almost to a person, they stayed until they fixed the problem. True, there was the incentive of earning back points, but it was marginal: generally only 2-3 points per assignment. More rewarding was that they became better writers by learning to proofread more critically.
- Student reflections generally indicated that the opportunity to correct mechanical errors and earn back some credit was a hit, even if it was not always easy to spot the error in a sentence.
I’ve come to appreciate that assessing student learning is a highly complex task. Practically speaking, the many demands on our time and energy make assessment a context-dependent process. To manage the task of assessment efficiently and still make it a useful tool for learning, I have developed a few guidelines for myself.
- Communicate clearly with students. Provide them with a rubric and clearly explain the criteria and standards to which you will hold them for each assignment.
- Listen and observe. Students attach meaning and sometimes even emotional energy to grades. An overly negative interpretation can significantly impact students’ motivation to learn. Be clear with students about what grades mean in your class.
- Not every assignment needs to be graded. Small-stakes assignments can build and reinforce skills without demanding intensive grading.
- Share the work. Peer review provides students with the opportunity to give and receive feedback in a safe environment. Peer review, like collaborative assessment, also emphasizes student involvement in the grading process.