In an introductory course in information technology at the University of Otago, the acquisition of practical skills is considered a prime objective. An effective way of assessing the achievement of this objective is a `practical test', in which students are required to accomplish simple tasks in a controlled environment. The assessment of such work demands a high level of expertise, is very labour intensive, and can suffer from marker inconsistency, particularly with large candidatures.
This paper describes the results of a trial in which the efforts of one thousand students in a practical test of word processing were scored by means of a program written in MediaTalk. Details of the procedure are given, including sampling strategies for the purpose of validation and examples of problems that were encountered.
It was concluded that the approach was useful and, once properly validated, gave rise to considerable savings in time and effort.

dc.format.mimetype: application/pdf
dc.publisher: University of Otago (en_NZ)
dc.relation.ispartofseries: Information Science Discussion Papers Series (en_NZ)
dc.subject: computer-aided learning (en_NZ)
dc.subject: automated scoring (en_NZ)
dc.subject: computer education (en_NZ)
dc.subject: test validation (en_NZ)
dc.subject.lcsh: QA76 Computer software (en_NZ)
dc.title: Automated scoring of practical tests in an introductory course in information technology