Registration of Retinal Images.

Yogesh Babu Bathina

Every so often a need arises in clinical scenarios to integrate information from multiple images or modalities for the purposes of diagnosis and pathology tracking. Registration, the most fundamental step in such an integration, is the task of spatially aligning a pair of images of the same scene acquired from different sources, viewpoints, or times. This thesis concerns the task of registration specific to the three most popular retinal imaging modalities, namely Color Fundus Imaging (CFI), Red-Free Imaging (RFI) and Fluorescein Fundus Angiography (FFA). CFI is obtained under white light, which enables experts to examine the overall condition of the retina in full color. In RFI, the illuminating light is filtered to remove the red component, which improves the contrast between vessels and other structures. FFA is a time sequence of images acquired under blue excitation light after a fluorescent dye is injected intravenously into the blood stream. This provides high-contrast vessel information revealing blood flow dynamics, leaks and blockages.

The retina is part of the central nervous system (CNS) and is composed of many different types of tissue. Given this distinctive feature, a wide variety of diseases affecting different body systems uniquely affect the retina. These systemic diseases include diabetes, hypertension, atherosclerosis, sickle cell disease and multiple sclerosis, to name a few. Recent advances reveal a close association between retinal vascular signs and cerebrovascular, cardiovascular and metabolic outcomes. Simply put, the health of the blood vessels in the eye often indicates the condition of the blood vessels (arteries and veins) throughout the body.

Registration of multimodal retinal images aids in the diagnosis of various retinal diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration. Single-modality images acquired over a period of time are used for pathology tracking. Registration is also used for constructing a mosaic image of the entire retina from several narrow-field images, which aids comprehensive retinal examination. Another key application area for registration is surgery, both in the planning stage and during surgery, for which only optical range information is available. Fusion of these modalities also helps increase the anatomical range of visual inspection, enables early detection of potentially serious pathologies, and allows assessment of the relationship between blood flow and the diseases occurring on the surface of the retina.

The task of registering retinal images is challenging given the wide range of pathologies captured by different modalities in different ways, geometric and photometric variation, illumination artifacts, noise and other degradations. Many successful methods have been proposed in the past for registering retinal images. A review of these methods shows good performance on healthy retinal images. However, the scope for handling a wide range of pathologies is limited for most of these approaches. Further, these methods fail to register poor-quality images, especially in the multimodal case. In this work, we propose a feature-based retinal image registration algorithm capable of handling such challenging image pairs.

At the core of this algorithm is a novel landmark detector and descriptor scheme. A set of landmarks is detected on the topographic surface of the retina using a curvature dispersion measure. The descriptor is based on local projections using the Radon transform, which characterizes local structures in an abstract sense, rendering the descriptor less sensitive to pathologies and noise. Drawing on recent developments in robust estimation methods, a modified MSAC (M-estimator Sample Consensus) is proposed for pruning false correspondences. Taken together, the contributions at each stage of the feature-based registration scheme presented here are significant. We evaluate our method against two recent schemes on three different datasets, which include both monomodal and multimodal images. The results show that our method gives better accuracy on poor-quality and pathology-affected images while performing on par with the existing methods on normal images.
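The two algorithmic components above can be sketched in code. The following is a minimal, hypothetical illustration, not the thesis implementation: a projection-based local descriptor in the spirit of the Radon-transform scheme (projection profiles over a circular patch support, concatenated after normalization), and an MSAC loop for pruning false correspondences, here fitted to a 2-D similarity transform written in complex arithmetic. All function names, parameter values and the choice of transform model are assumptions made for the sake of the example.

```python
import numpy as np

def projection_descriptor(patch, n_angles=8, n_bins=10):
    """Radon-style local descriptor (illustrative sketch): the patch is
    projected along several orientations and the normalized projection
    profiles are concatenated into one feature vector."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs = ys - cy, xs - cx
    radius = min(cy, cx)
    mask = ys ** 2 + xs ** 2 <= radius ** 2  # circular support region
    profiles = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        # signed distance of each pixel from the line through the centre
        t = xs * np.cos(theta) + ys * np.sin(theta)
        bins = np.clip(((t[mask] + radius) / (2 * radius) * n_bins).astype(int),
                       0, n_bins - 1)
        proj = np.bincount(bins, weights=patch[mask], minlength=n_bins)
        profiles.append(proj / (np.linalg.norm(proj) + 1e-12))  # robustness to gain
    return np.concatenate(profiles)

def msac_similarity(src, dst, n_iter=500, tol=3.0, seed=0):
    """MSAC correspondence pruning (illustrative): minimal 2-point
    hypotheses of a similarity transform dst ~ a*src + b (complex form),
    scored by the truncated squared residual rather than a plain inlier
    count, so accurate inliers are rewarded."""
    rng = np.random.default_rng(seed)
    s = src[:, 0] + 1j * src[:, 1]
    d = dst[:, 0] + 1j * dst[:, 1]
    best_cost, best = np.inf, None
    for _ in range(n_iter):
        i, j = rng.choice(len(s), size=2, replace=False)
        if s[i] == s[j]:
            continue
        a = (d[i] - d[j]) / (s[i] - s[j])   # scale+rotation as one complex number
        b = d[i] - a * s[i]                 # translation
        r2 = np.abs(d - (a * s + b)) ** 2
        cost = np.minimum(r2, tol ** 2).sum()  # MSAC truncated cost
        if cost < best_cost:
            best_cost, best = cost, (a, b)
    a, b = best
    inliers = np.abs(d - (a * s + b)) < tol
    return a, b, inliers
```

A full pipeline would detect landmarks, compute a descriptor per landmark, match descriptors (e.g. by nearest neighbour), and pass the putative matches through the MSAC loop; the surviving inliers then define the final spatial transform.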