Abstract

Reading-into-writing assessments on pre-sessional language programmes typically employ either a take-home essay format with a substantial reading component or an exam-based writing task with a reading component of perhaps only one or two pages. While both approaches reflect a welcome trend towards more integrative models of validity for the assessment of academic writing, their usefulness may nevertheless be undermined by their task design. The apparent recent increase in the activity of ghost writers, often facilitated by technological means, can cast sufficient doubt over the authorship of take-home essays to invalidate the assessment. The exam-based task, on the other hand, may suffer from construct under-representation (Messick, 1996: 6), since its limited reading component requires little or no expeditious reading (Weir and Urquhart, 1998: 98–100) of the longer texts commonly associated with university study. This article describes a response to these validity issues in the form of an open-book exam. It concludes that the processing of longer texts outside the exam room, combined with the security of a written response under exam conditions, can reduce the time spent dealing with plagiarism cases arising from outside assistance, while at the same time demonstrating some positive washback on learning (Messick, 1996: 6) in terms of increased engagement with the source texts used.