

Machine Scoring of Student Writing

Programs that purport to read and evaluate student writing are being heavily marketed to K-12 and college administrators and teachers. Criterion, a product of ETS, is marketed directly to students as well through college bookstores. The marketing muscle that companies like ETS, Vantage, and Intellimetric (now Pearson) put behind these programs gives us great concern. We have assembled this collection so that teachers know what is on the market and, should their school or school system consider adopting one of these machine-scoring services, can act appropriately.

In this collection we argue, paraphrasing Ed White, this country's dean of writing assessment research, that writing to a machine is not writing at all—that we write to human beings, for human purposes, and that building a text for a machine-scoring program is not writing but some other activity, perhaps more closely related to game-playing than to human communication.

We begin with our own essay, where we make our argument and support it by creating a text for Criterion and then reporting and analyzing its response. Not surprisingly to us, Criterion's responses are either vague and misleading or entirely wrong. In the second resource, Carl Whithaus reminds us that students are already writing on word-processing programs with spell- and grammar-checkers that give them responses to their writing. Instead of condemning these computer-generated responses out of hand, Whithaus argues, we should teach students how to understand the machine-generated feedback. Our third resource is Beth Ann Rothermel's description of the ways in which machine-scoring programs are being marketed to K-12 schools and teachers—even being supplied to schools already installed on computers that are bought in large batches. Rothermel describes the effects of the presence of these programs on the teacher, who now has to deal not only with the students' writing but with the machine's responses to that writing—which, if our own experiences with Criterion are any indication, will require a great deal of explanation.

Our fourth resource is a study by Anne Herrington and Sarah Stanley of the bias of Criterion, a machine-scoring program, toward a normalized English which, coupled with the program's focus on word-level error, is harmful to writers generally but particularly so to English Language Learners. Our fifth and final resource is a position statement from the Conference on College Composition and Communication (CCCC), a constituent organization of the National Council of Teachers of English (NCTE). This position statement is unequivocal: "Writing-to-a-machine violates the essential nature of writing."


Comments

dogtrax

This resource is being shared as part of a discussion in the NWP Tech List-serve, and at the NWP Connect Site.

http://connect.nwp.org/national/blog/9817/automated-scoring-writing#comm...

Thanks for the ways you not only compiled the resources here, but also framed them from your own research, Charlie and Anne.

Kevin

Tellio

Something about outsourcing my life's work to an algorithm is oddly appealing.  Who wouldn't want to offload the "drudgery" of the hard scuffle that marking papers is?  Me.  It is in the whirring, crashing gears of the paper that I find the beautiful and apt phrase, the wee voice that is in all of us, and the misery of writing hopes deferred. Having gotten out my feelings here, your resource here is very helpful to a full understanding of a very important social issue--can cyborgs do our work safely?  Good on ya, Kevin.

anne-charlie
on Oct 30 2012

Resources in this collection

Machine Reading of Student Writing
"Always Already: Automated Essay Scoring and Grammar-Checkers in College Writing Courses"
"Automated Writing Instruction: Computer-Assisted or Computer-Driven Pedagogies?"
Criterion: Promoting the Standard
CCCC Position Statement on Teaching, Learning, and Assessing Writing in Digital Environments