A number of scholars have questioned the practice of assessing academic writing in the context of a one-off language test, claiming that the time restrictions imposed in the test environment, when compared to the writing conditions typical at university, may prevent learners from displaying the kinds of writing skills required in academic contexts. Studies that have explored this issue have so far produced conflicting findings. This paper investigates the impact of an efficiency-driven policy decision to reduce the time allowed for performance on a post-entry diagnostic test of academic writing from 55 to 30 minutes. It does so by comparing the performance of 30 test takers under both the old and new time conditions. A fully counterbalanced design was chosen to establish whether the different time limits had an effect on (a) the writing scores, (b) the inter-rater reliability, and (c) the quality of the discourse. Test takers' perceptions were also canvassed via a post-task questionnaire. Findings showed that the test takers' scores on the analytic rating criteria did not differ significantly under the two time conditions, although high-proficiency candidates profited more from the extended time allowance than the others did. Ratings were equally reliable in the "short" and "long" conditions. The detailed discourse analysis showed that the longer writing condition yielded, as predicted, a better-quality performance on a number of variables; however, performance on the majority of the variables was unaffected by the time factor. The questionnaire data nevertheless showed that students in general preferred having more time for planning and revising. The study considers the implications of these findings for the validity and fairness of diagnostic writing tests. (Contains 10 tables.)