Purpose :
Telemedicine networks are being deployed in several countries for mass screening of retinal pathologies such as diabetic retinopathy or age-related macular degeneration. Images are acquired with different models of fundus cameras, and the quality of the resulting images can be insufficient for interpretation by ophthalmologists or automated systems. With the growing use of handheld retinographs, this problem will only worsen.

An efficient method is presented to estimate the quality of eye fundus images using a relatively simple convolutional neural network. One objective of the method is to give feedback to the user during acquisition, so that an image can be re-acquired if it is considered ungradable. The method is based on estimating the visibility of the fovea and the surrounding vessels.

Methods :
A database of 6098 images was constituted from the e-ophtha database, provided by the OPHDIAT telemedicine network. When the fovea and surrounding vessels were considered visible, the center of the fovea was manually marked on the image. Pre-processing consists in subsampling the green channel of the image to 128x128 pixels, as tests showed that this resolution is sufficient for the task.
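The pre-processing step described above can be sketched as follows. The abstract only states that the green channel is subsampled to 128x128 pixels; the nearest-neighbour resampling used here (and the function name) are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def preprocess(rgb_image: np.ndarray, size: int = 128) -> np.ndarray:
    """Extract the green channel and subsample it to size x size pixels.

    Nearest-neighbour subsampling is an assumption made for this sketch;
    the abstract does not specify the resampling method.
    """
    green = rgb_image[:, :, 1].astype(np.float32)
    h, w = green.shape
    rows = np.arange(size) * h // size  # row indices to keep
    cols = np.arange(size) * w // size  # column indices to keep
    return green[np.ix_(rows, cols)]
```

In practice, an anti-aliased resize (e.g. area averaging) would likely be preferred before feeding the image to the network.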

The method uses a purely convolutional neural network, kept simple to speed up prediction and reduce energy consumption. The network was trained on the database to predict a 20-pixel-diameter disk centered on the fovea when it is visible, and nothing when it is not. A post-processing step based on mathematical morphology gives the final segmentation result: if a single connected component is predicted by the network, its centroid is taken as the center of the fovea.
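A minimal sketch of this post-processing step is shown below. The threshold value and the 3x3 structuring element are illustrative assumptions; the abstract only specifies that mathematical morphology is applied and that the centroid of a single connected component is kept.

```python
import numpy as np
from scipy import ndimage

def fovea_center(prob_map: np.ndarray, threshold: float = 0.5):
    """Post-process the network output map.

    Threshold the map, clean it with a morphological opening, and accept
    the detection only if exactly one connected component remains; its
    centroid is then returned as (x, y). Otherwise return None (no fovea
    detected, or ambiguous prediction).
    """
    mask = prob_map > threshold
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n_components = ndimage.label(mask)
    if n_components != 1:
        return None
    cy, cx = ndimage.center_of_mass(mask)  # (row, col) of the component
    return (cx, cy)
```

Returning None here corresponds to the case where the image is flagged as ungradable with respect to the fovea region.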

Results :
The accuracy of the method is 96.4%, and it correctly identifies ungradable images in 98.7% of cases. Given the subjective nature of quality evaluation, this result is very satisfactory.

The precision of the fovea position, when detected, was measured on our database as well as on the ARIA database. The mean test errors are respectively 0.95 and 1.4 pixels, and the maximal errors 4.85 and 6 pixels.

Conclusions :
The presented method paves the way towards embedded, fully automatic quality estimation of eye fundus color images, and should decrease the number of ungradable images. It will be commercialized by Messidor (http://messidor.vision).

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.