Revisiting Writing Assessment

“Writing assessment is thus both hero/ine, the practice that brings us into relationship with our students, and villain, an obstacle to our agency.” (Yancey, 167)

To be honest, I feel like both Kathleen Yancey’s “Writing Assessment in the Early Twenty- First Century” and John Bean’s “Using Rubrics to Develop and Apply Grading Criteria” only confirm and further affirm points we have been making in class and through our blogs all semester. Mainly, that while writing assessment has certainly made strides in the past few decades, it is still in need of much improvement to actually address what it should be addressing: student learning and progress. Much writing assessment is more concerned with meeting academy standards than with adequately and helpfully assessing student work. The institutions instead of the individuals are prioritized. A reliable means of collecting results is prized over valid assessment of skill. Generalization is prized over individualized learning. While Bean discusses how this difference of priorities affects rubrics, Yancey looks more at how writing programs themselves are affected when standardization sinks its claws in.

Since much has already been said about the failings of contemporary writing assessment as a whole, I’m just going to focus on what Bean says about designing rubrics to avoid some typical pitfalls.

Until now, much of our research has looked at writing assessment in overview. Here, Bean focuses on one particular kind of writing assessment–the rubric. Not going to lie, I went into this reading with a rather strong bias already in place. See, there is no love lost between me and rubrics. Even the best I’ve come across personally are difficult to interpret. Really, they’ve always seemed like “cop-outs” for teachers and professors who don’t want to actually have to think about the work they’re grading. They can select a number and be done with it. The listed criteria are somehow adequate justification for the grade. Students who struggle to interpret rubrics or who don’t write well under strict guidelines inevitably suffer.

Perhaps I’ve just been in contact with too many generic-style rubrics, though.

In this article, Bean identifies many different styles of rubrics but two overarching kinds that most fall into–generic or task-specific (either analytic or holistic). Generic rubrics seek to be universally applicable to writing, whereas task-specific rubrics are unique to each assignment, with criteria aimed at particular aspects of the work. Clearly showing a preference I can get behind, Bean says, “…A generic rubric can’t accommodate the rhetorical contexts of different disciplines and genres” (279). Despite my general distaste for most writing assessment, I did find some of the task-specific rubrics Bean shared to be not awful (Figures 14.6 and 14.9, to be specific). While I’m all for allowing students opportunities to explore themselves through their writing and to just write, I also understand that certain genres have certain standards that students must meet. Thus, there has to be a way to inform students of their ability to meet those standards when it comes to genre-specific work. Task-specific rubrics, I think, when formatted with room for students to still explore, provide a means for both educators and students to maintain their own agency while also learning how to write for more standardized genres.

Task-specific rubrics, at least, try to find a balance between the academy, the educators who must work for it, and the students who must learn within it.

Yancey’s work provided a look into how the academy uses the data it receives from assessment to structure its writing programs. I thought this article paired rather well with Bean’s in that respect. Not much of our research thus far has concerned itself with how institutions use assessments to inform their programs, let alone with how the programs themselves are assessed. According to Yancey, it seems most writing programs are assessed much like they assess writing. In fact, a “good” program creates a kind of “feedback loop” in which how well student writing meets the learning objectives informs whether the program’s objectives and instruction are themselves appropriate.

Aside from the few programs Yancey identified, though, this seems more like an ideal model than a practical one. Too many programs also seem more concerned with, again, meeting national standards than with developing local criteria for assessment–or, rather, with allowing local criteria to be equally valued. I’m torn on exactly how I feel about this, to be honest. Allowing local faculty to develop their own criteria for assessment seems the best way to allow for more, as Yancey calls it, “authentic writing assessment,” but having a program rely entirely upon that criteria may create writing that is too insular and unable to join the global dialogue. In my opinion, writing instruction that doesn’t adequately prepare students to join many contexts or provide them with opportunities to explore diverse contexts ultimately does them a disservice. What Yancey’s reading leaves me with, then, is a conundrum: can authentic writing assessment and authentic writing coexist? Must one be sacrificed for the good of the other?

I’m very interested to hear what my peers got out of these readings!

****

~Till next time~

