Abstract

Objective – To compare users’ perceptions of 5 clinical information resources, and to compare the average number of questions users could answer when attempting 3 randomly assigned clinical questions on each resource.

Design – A combined task assessment, based on the design specification published in the Sixth Text REtrieval Conference (TREC-6) “interactive track,” and a user-satisfaction questionnaire developed from previously published surveys.

Setting – A health sciences library at a university in the United States of America.

Subjects – A convenience sample of 18 volunteers, who were either university health care staff or students.

Methods – A set of 15 clinical test questions was developed from previous studies. Participants were randomly allocated 3 test questions, which they then attempted to answer using each of 5 commercially available clinical information resources. Each participant was allocated a different set of test questions for each resource and did not attempt the same question on more than one resource. As part of the overall study design, the questions were randomised such that each question was paired with each resource at least once. The order in which the resources were tested by participants was also randomised. The resources tested were ACP’s PIER, DISEASEDEX, FIRSTConsult, InfoRetriever and UpToDate. Training in use of the resources was not provided as part of the study; however, participants were allowed to familiarise themselves with each resource before attempting the test questions. To simulate a clinical situation, participants were asked to spend a maximum of 3 minutes on each question. The number of questions successfully answered using each resource was recorded. Participants were also asked to complete a user satisfaction questionnaire, based on previously published questionnaires, for each resource after attempting the 3 questions allocated to that resource. The questionnaire used a 5-point Likert scale with participants asked to rate attributes such as clarity, ease of use, speed and accuracy of content. A final question also asked participants to indicate which resource they liked the best and which they liked the least. Participants also completed a background questionnaire, again based on previously published questionnaires, covering aspects such as age, gender, experience with searching and previous use of various information resources, including the 5 resources being tested.
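The allocation scheme described above — 3 questions per resource per participant, no question repeated for a participant, and every question paired with every resource across the sample — behaves like a rotated block design. The sketch below shows one way such an allocation could be implemented; the function names, question indices, and seeding are illustrative assumptions, not details taken from the study.

```python
import random

RESOURCES = ["ACP's PIER", "DISEASEDEX", "FIRSTConsult",
             "InfoRetriever", "UpToDate"]

def allocate(participant_index, questions, seed=None):
    """Assign 3 of the 15 questions to each of the 5 resources for one
    participant.  Rotating the question blocks by participant index
    guarantees that, across any 5 consecutive participants, every
    question is paired with every resource at least once, and no
    participant attempts the same question on more than one resource.
    (Illustrative sketch only; not the study's actual procedure.)"""
    assert len(questions) == 15
    blocks = [questions[i * 3:(i + 1) * 3] for i in range(5)]
    assignment = {res: blocks[(i + participant_index) % 5]
                  for i, res in enumerate(RESOURCES)}
    order = RESOURCES[:]          # order of testing is also randomised
    random.Random(seed).shuffle(order)
    return assignment, order
```

With 18 participants, three full rotations of 5 plus 3 extra participants would pair each question with each resource at least three times, satisfying the "at least once" constraint reported in the abstract.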

Main results – Characteristics of participants – Participants ranged in age from 28 to 49 years (mean 35 years), and were experienced computer users, with over 94% using a computer at least once a day. The group comprised 42% men and 58% women. The participants’ occupations were physician (44%), medical informatics student with previous clinical experience (28%), pharmacist (17%), nurse (6%) and MRI technologist (6%). Participants had been in their current profession for a mean of 8 years (range 1 to 20 years). Whilst 72% of participants reported familiarity with UpToDate, no more than 12% reported familiarity with any one of the other information resources tested.

Clinical questions – Participants were able to answer more questions with UpToDate (average 2.5 questions) than with the other resources, for which the average number of questions answered ranged from 1.6 (ACP’s PIER) to 1.9 (DISEASEDEX). This difference was statistically significant using the Friedman test.
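The Friedman test cited here ranks each participant’s scores across the 5 resources and compares the resulting rank sums. The abstract does not reproduce the per-participant counts, so the sketch below shows only how the statistic itself is computed, in plain Python; the helper name and any data passed to it are hypothetical.

```python
def friedman_statistic(samples):
    """Friedman chi-square for k related samples (sketch; ties receive
    average ranks, but the usual tie-correction divisor is omitted).
    samples: list of k equal-length lists, one per resource, each
    holding one score per participant."""
    k = len(samples)
    n = len(samples[0])
    rank_sums = [0.0] * k
    for block in zip(*samples):            # one participant's k scores
        order = sorted(range(k), key=lambda j: block[j])
        ranks = [0.0] * k
        i = 0
        while i < k:                       # assign average ranks to ties
            j = i
            while j + 1 < k and block[order[j + 1]] == block[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1     # ranks are 1-based
            for m in range(i, j + 1):
                ranks[order[m]] = avg_rank
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi2_F = 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)
```

In practice one would compare the statistic against the chi-square distribution with k − 1 degrees of freedom (or use a library routine such as `scipy.stats.friedmanchisquare`, which also returns the p-value).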

User satisfaction – The user satisfaction survey results showed no significant differences in perceptions of the different resources in relation to accuracy, currency of content, speed or amount of information provided. However, UpToDate scored significantly higher (Friedman test) on ease of use, clarity of screen layout and how well it satisfied participants’ needs.

Conclusion – A number of commercial information resources are now available that aim to help clinical staff make treatment decisions at the point of care. This study evaluates 5 such resources by comparing both success in answering typical clinical questions and the results of a user satisfaction survey.

The study indicates that participants were able to find significantly more answers when using UpToDate compared to the other resources tested. Whilst there was no statistically significant difference between the user perception ratings assigned to each resource with regard to speed, accuracy or amount of information provided, participant ratings for screen layout and ease of use significantly favoured UpToDate. In addition, significantly more participants identified UpToDate as the best resource.

Evaluations of clinical information resources have traditionally focused on user ratings of the content of these products. The findings of this study suggest that this approach may no longer be sufficient, and that evaluations that address the user’s experience (satisfaction concerning ease of use, speed, etc.) are also needed.