As an educator, I continually struggle with knowing how best to evaluate writing assignments. Do I use a rubric? If I do, will the students read it or just look at the letter grade? Do I merely write comments on the papers, or combine that with a rubric? How many drafts can a student create before we both get sick of the paper? Am I putting in more work grading than the student is putting into writing the piece? Many teachers encounter these questions and more as we work with student papers.
For me, evaluating student writing has been a continual push and pull between wanting a streamlined, objective way to evaluate writing and wanting to acknowledge the craft and artistry, the subjective aspect, of writing. There may not be anything technically wrong with a piece of writing, but does that make it “effective”? This is what makes the arts and the humanities frustratingly wonderful and beautiful, and also difficult to score, because until we have a better system, we must mark at least some student writing with grades.
During the past two years, I have muddied the waters even more for myself by creating a semester-long poetry unit that ends with a class-sponsored poetry café complete with food, music, lanterns, and readings of student-composed poetry. If evaluating essays is hard, evaluating poetry is even more challenging because the standards for poetry are so fluid.
After trying several different approaches, I have landed on something I am fairly happy with. It doesn’t solve all the struggles of evaluating poetry in a classroom setting, but it does honor the expertise of the teacher, the students’ own perceptions of their quality of writing, and the impressions of third-party readers.
I began by announcing that each poem would be scored using three weighted components: the student’s own score, my score, and a peer’s score. The students debated back and forth about the weight each score should carry, and in the end they landed on 40% for my score, 40% for their own score, and 20% for the peer score. This is not necessarily a perfect ratio, but the point of having the students decide was to open a conversation about who should have the most say in evaluating the quality of something: the expert, the creator, or the audience.
After trying out the system, a couple of practical considerations became clear:
- The students need a simple rubric they can use to show the reasoning behind their scores. This helps make student evaluations more objective.
- Some students may try to inflate their own grades. To address this, I told the students that the self-assigned and peer-assigned grades had to be within 10% of mine. If a grade fell outside that margin, my score would replace it for that evaluator’s component. This keeps students from inflating or deflating their own or another student’s grade.
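For readers who like to see the arithmetic spelled out, the scheme above can be sketched as a small calculation. This is only an illustration: the function and variable names are my own, and it interprets “within 10%” as within 10 points on a 100-point scale.

```python
def final_score(teacher, self_score, peer, tolerance=10):
    """Combine three scores on a 100-point scale:
    40% teacher, 40% self, 20% peer.

    Any self or peer score more than `tolerance` points away from
    the teacher's score is replaced by the teacher's score, which
    discourages inflating or deflating a grade.
    """
    def check(score):
        return score if abs(score - teacher) <= tolerance else teacher

    return 0.4 * teacher + 0.4 * check(self_score) + 0.2 * check(peer)

# Example: the teacher gives 85, the student rates themselves 100
# (outside the 10-point margin, so replaced by 85), and a peer gives
# 90 (within the margin, so kept).
print(final_score(85, 100, 90))
```

The replacement rule, rather than simple clamping, matches the policy described above: an out-of-range evaluation contributes the teacher’s score for that component, not the nearest allowed value.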
I am continuing to tweak the process, making it more relevant and reliable as an evaluation tool, and it may prove valuable for evaluating more formal writing as well. By involving students in evaluating their own work and the work of their classmates, they become more active learners who think critically about the qualities that make a piece of writing excellent.