Objectives: The objective of this project is to assess the reliability of the key performance indicators used to manage a radiology department.

Methods: The working environment is the radiology department of the New York University Medical Center (NYUMC). Timestamps and other data were drawn directly from the radiology information system (RIS). This work focuses on the reliability of the performance indicators "device utilization" (indicator PI1) and "planning accuracy" (indicator PI2). Device utilization is the percentage of a shift during which a device is in use. Planning accuracy is the deviation of the actual exam duration from its scheduled preset. Examination begin and end timestamps are used to compute both indicators. To obtain reliable data from the RIS, the following checks were applied:

CI1: Percentage of exams with the exam begin timestamp entered

CI2: Percentage of exams with the exam completion tracked

CI3: Percentage of exams whose exam begin occurred at least 2 min before the exam completion

CI4: Percentage of exams whose duration ranges from 2 min to 5 times the preset duration for that exam type
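The four checks above can be sketched as simple filters over exam records. This is a minimal illustration, assuming a hypothetical record layout with `begin` and `end` timestamps (or `None` when missing) and a `preset` duration in minutes; the field names are not from the RIS itself.

```python
from datetime import datetime, timedelta

def check_indicators(exams):
    """Compute CI1-CI4 as percentages of all exams.

    Assumes each exam is a dict with 'begin', 'end' (datetime or None)
    and 'preset' (scheduled duration in minutes) -- illustrative names only.
    """
    n = len(exams)
    # CI1: exam begin timestamp entered
    ci1 = sum(1 for e in exams if e["begin"] is not None) / n
    # CI2: exam completion tracked
    ci2 = sum(1 for e in exams if e["end"] is not None) / n
    # CI3: begin at least 2 min before completion
    ci3 = sum(
        1 for e in exams
        if e["begin"] is not None and e["end"] is not None
        and e["end"] - e["begin"] >= timedelta(minutes=2)
    ) / n
    # CI4: duration between 2 min and 5x the preset for the exam type
    ci4 = sum(
        1 for e in exams
        if e["begin"] is not None and e["end"] is not None
        and timedelta(minutes=2)
            <= e["end"] - e["begin"]
            <= timedelta(minutes=5 * e["preset"])
    ) / n
    return {k: round(100 * v, 1)
            for k, v in zip(("CI1", "CI2", "CI3", "CI4"), (ci1, ci2, ci3, ci4))}

exams = [
    {"begin": datetime(2010, 1, 4, 8, 0), "end": datetime(2010, 1, 4, 8, 20), "preset": 15},
    {"begin": datetime(2010, 1, 4, 9, 0), "end": None, "preset": 15},
]
print(check_indicators(exams))  # CI1 is 100%, CI2-CI4 are 50% for this toy sample
```

Exams failing any of these checks would be excluded before the indicators are computed, since a missing or implausible timestamp makes the derived duration meaningless.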

Descriptive statistics and data visualization techniques such as bar charts, 3D surface charts, and contour charts (2D projections of 3D surface charts) were used to gain insight into the reliability of the indicators.
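The two performance indicators defined above can likewise be sketched from the same timestamps. This is a hedged illustration, not the department's actual implementation; the record layout and shift length are assumptions.

```python
from datetime import datetime

def device_utilization(exams, shift_minutes):
    """PI1: percentage of a shift during which the device is in use.

    Assumes each exam dict has 'begin' and 'end' datetimes and that
    all exams fall within the given shift (illustrative simplification).
    """
    busy = sum((e["end"] - e["begin"]).total_seconds() / 60 for e in exams)
    return 100 * busy / shift_minutes

def planning_accuracy(exam):
    """PI2: deviation of actual exam duration from its scheduled preset, in minutes."""
    actual = (exam["end"] - exam["begin"]).total_seconds() / 60
    return actual - exam["preset"]

exam = {"begin": datetime(2010, 1, 4, 8, 0),
        "end": datetime(2010, 1, 4, 8, 30),
        "preset": 20}
print(device_utilization([exam], shift_minutes=480))  # one 30-min exam in an 8 h shift
print(planning_accuracy(exam))  # ran 10 min over its preset
```

Both indicators depend directly on the begin/end timestamps, which is why the CI1-CI4 checks on timestamp quality determine how far PI1 and PI2 can be trusted.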

Results: Our examination revealed that the quality of the timestamps varies over the course of the week. For example, quality decreases on average by 10% on Thursday and Friday afternoons. Additionally, we detected a lack of precise semantics for some of the measures. We discuss these findings and their impact on assessing and improving current and future clinical workflows using widespread, established performance indicators.