Syntactic Fluency, Objective Writing Assessment, The Self, & Other Illusions

“Meaning is, after all, located in more and other than correlations: it is intellectual and rhetorical substance.” (Yancey, 491)

I counted it as a blessing when I tested out of having to take a placement test for English at my university. Three cheers for AP English and all that jazz~

But, how ludicrous is that?

Because I took a test, I didn’t have to take another test. I had already been assessed by the powers that be and found “adequate”, acceptable. Like, my answers and my self are some kind of interchangeable parts–constructed to work just so in the larger system.

Talk about an illusion of agency. A false sense of self.

In her article "Looking Back as We Look Forward: Historicizing Writing Assessment," Kathleen Blake Yancey does just that–she discusses writing assessment's attempts to, well, actually assess anything meaningful, as well as the practice's failures to do so. To do this, Yancey explores the history of writing assessment, which she divides into three waves. The first wave, structured almost entirely by psychometricians, was purely empirical, relying on responses to multiple-choice questions. The second wave attempted to assess students through more subjective–yet still empirical–means by having them actually write a short essay that would then be holistically "scored" by "trained" readers; a standard writing sample for each score was provided so those readers had something to compare student work against. And, lastly, the third and final wave concerns itself with cumulative progress by focusing on a portfolio of work rather than a solitary writing sample or test. Also, because this method advocates for a caucus of professors to assess the work instead of "trained" readers outside the collegiate sphere, professors can assess themselves and their own practices as well. Win-win, right? In theory, yes. In practice…

Not so much.

Validity and reliability are core concerns of all of these methods/theories. And, to varying degrees, each method attempts to address one or the other (because apparently it is imperative that everything be in constant competition; somewhere, Peter Elbow is rolling his eyes). Sadly, it is the sacrifice of one for the other that seems to invalidate all of these methods. The emphasis on objectivity and reliability in the first two methods diminishes authorship, which diminishes authenticity, which ultimately makes any assessment of the written work an assessment of the writer's ability to follow instructions and memorize mechanics, not of their ability to generate thoughtful content. Essentially, you end up looking at words on a page in a circumscribed form/framing and little more. And the third wave's focus on the personal and the subjective in writing inherently robs it of reliability. It's subjective. That's the point: there is no standard. I know this portfolio method, by means of a kind of tribunal, still seeks to "score" student work, but I believe Yancey addresses the overall issue with that when she asks "whether a single holistic score is appropriate given the complexity of the materials being evaluated" (494). Despite raising this concern, Yancey does go on to discuss the benefits of applying this method, but, if you want my opinion, it seems to help professors and other writing professionals assess themselves and their practices more than it actually helps student writers understand where they "stand," so to speak.

So, that's a pass for me on all of this writing assessment, at least in regard to most of this particular nonsense on the subject.

I have some serious reservations about contemporary writing assessment. None of the theories of practice presented really appealed to me. Mainly, as I've already touched on and as Yancey was careful to point out, the reason for this is that all these assessments seem to convey is how well students can follow instructions–both in writing and in thinking. The latter greatly concerns me, to say the very least.

It really doesn't help that I know I'm a victim of these methods either. I'm no stranger to receiving a 4, a 5, and a 6 on different papers in the same class and wondering, "What does this mean?" Sure, I could read through the rubric–which I don't recall a single high school teacher ever going over–but, honestly, the language of those things communicates only that my writing is only as good as my scorer/teacher's interpretation of its ability to meet the listed criteria. And that's the ideal situation.

I got a 4 out of 5 on my AP English exam–who am I even supposed to ask why? Those scorers have access to my work, but I have no access to them. That's the story with standardized tests, at least. As for teachers, they have a standardized test to prepare us for. They always do. So they score us the same way–to prepare us, right?

Bitter, who’s bitter?

Anyway, as you can no doubt imagine, and if my thoughts on Yancey's work are anything to go by, there's little love lost between me and Robert J. Connors' "The Erasure of the Sentence". I understand where he's coming from–that bending to popular trends can severely undermine potentially valuable research–but I just have little interest, or really any personal investment, in "syntactic fluency" or the like. Sorry to say, I'm with the anti-formalists and the anti-empiricists on this one.

There is certainly something to be said for syntax and, more, its relation to rhetoric and "good" writing as a whole, but I'll side every time with the notion that content trumps form. Knowing the conventions of Standard Written English (SWE) doesn't automatically make you a better writer or mean you will generate more thoughtful or meaningful content. In the short term, yes, it may advance your writing. But there's simply no evidence to suggest it has any long-term benefits. And I get that that is Connors' point–there should be more research. Believe me, I do. I just also believe there are subjects more beneficial–for all–to be researching. The syntactic concerns Connors gives credence to in this article are SWE-specific and so are, ultimately, exclusive. Restrictive, etc. We're talking about how to make better academic writers here. At least, that's what I got from this article. It's very epistemological and not so personal. And my overall assessment is that this approach robs writing of voice and seeks to make all writing sound the same. Not my cup of tea.

In essence, I’m pretty anti-establishment when it comes to writing studies. At least, that seems to be what I’m learning about myself… Anyway, I hope you enjoyed my little perspective here on writing assessment and writing theory. Maybe found it insightful? If not, I’ll settle for entertaining.

~Till next time~

(Am I the only one who read Francis Christensen as Hans Christian Andersen and was like, “what does The Little Mermaid have to do with this?” Just me?)

 
