Abstract

This paper reviews three years of data measuring students' immediate reactions to a computer-assisted learning package in information skills, and reports on work in progress to establish a more comprehensive programme of evaluation that will assess the longer-term impact on learning of both the courseware itself and the way it is delivered to students. The GAELS courseware was developed in the late 1990s as part of a collaborative project between the Universities of Glasgow and Strathclyde, with funding from the Scottish Higher Education Funding Council. Designed to teach higher-level information skills, the courseware was initially developed for use with postgraduate engineering students; it has subsequently been adapted for students in other subject areas, including the biological and physical sciences, and has for several years been embedded in workshop sessions undertaken with postgraduate and undergraduate students across the Faculties of Science and Engineering at the University of Strathclyde. The courseware is introduced at the start of the academic session and made available on the Web so that students can use it as needed during their course and project work.

In the first year, the courseware was used in isolation from other teaching methods (although a librarian was present to support students); in the second and third years it was integrated into more traditional workshop-style teaching sessions led by a librarian. Following the work described in Joint (2003), library staff now wish to assess the longer-term impact on learning of the courseware and its mode of delivery, but the existing evaluation data do not adequately support this type of assessment. Teaching sessions are routinely evaluated by means of simple feedback forms, comprising four questions answered on a five-point Likert scale, collected at the conclusion of each session.
According to Fitzpatrick (1998), such feedback forms measure students' reactions and represent only the first level of evaluation. Learning, which can be defined as the extent to which a student changes attitudes, improves knowledge and/or increases skill as a result of exposure to the training, is the second level and is not measured by these forms. A more comprehensive programme of evaluation, including logging of courseware usage outside teaching sessions and follow-up of students several months after their introduction to the courseware, is now being established to support a more meaningful assessment of the impact of the courseware on student learning.