Abstract

Only if we can estimate the colour of the prevailing light - and discount it from the image - can image colour be used as a stable cue for indexing, recognition and tracking (amongst other tasks). Almost all illumination estimation research uses, as the recovery error, the angle between the RGB of the actual measured illuminant colour and that of the estimated illuminant. However, here we identify a problem with this metric. We observe that the same scene, viewed under two differently coloured lights, leads to different recovery errors for the same algorithm, even though exactly the same reproduction is produced once the colour bias due to the illuminant is removed (i.e. we divide out by the light). We begin this paper by quantifying the scale of this problem: for a given scene and algorithm, we solve for the range of recovery angular errors that can be observed over all colours of light. We also show that the lowest errors occur for red, green and blue lights and the largest for cyan, magenta and yellow lights. Next, we propose a new reproduction angular error, defined as the angle between the image RGB of a white surface when the actual and the estimated illuminations are 'divided out'. Reassuringly, this reproduction error metric, by construction, gives the same error for the same algorithm-scene pair. For a range of algorithms and many benchmark datasets, we recompute illuminant estimation performance under the new reproduction error and compare the resulting algorithm rankings against those for the old recovery error. We find that the overall rankings of algorithms remain broadly unchanged - though there can be local switches in rank - and that the algorithm parameters delivering the best illuminant estimation performance depend on the error metric used.
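The two metrics contrasted in the abstract can be sketched in a few lines of code. The snippet below is an illustrative reading of the definitions given above, not the paper's reference implementation: recovery error is the angle between the actual and estimated illuminant RGBs, while reproduction error is the angle between the actual illuminant divided (componentwise) by the estimate and ideal white. The example illuminant values and the assumption that the algorithm's estimate scales with a change of light are hypothetical, chosen only to exhibit the invariance property claimed for the reproduction metric.

```python
import numpy as np

def angle_deg(a, b):
    # Angle between two RGB vectors, in degrees.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def recovery_error(actual, estimated):
    # Conventional metric: angle between actual and estimated illuminant RGBs.
    return angle_deg(actual, estimated)

def reproduction_error(actual, estimated):
    # Proposed metric: divide the actual illuminant out by the estimate
    # (the white surface's RGB after correction) and compare to ideal white.
    return angle_deg(actual / estimated, np.ones(3))

# Hypothetical scene: actual illuminant and an algorithm's estimate.
actual = np.array([1.0, 0.8, 0.6])
estimated = np.array([0.9, 0.9, 0.7])

# The same scene under a differently coloured light; we assume the algorithm's
# estimate scales with the light in the same componentwise way.
light = np.array([0.5, 1.0, 2.0])
actual2, estimated2 = light * actual, light * estimated

print(recovery_error(actual, estimated), recovery_error(actual2, estimated2))
print(reproduction_error(actual, estimated), reproduction_error(actual2, estimated2))
```

Under this model the two recovery errors differ, while the two reproduction errors are identical by construction, which is the invariance the abstract highlights.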