We address the problem of multimodal image registration using a supervised learning approach. We pose the problem as a regression task where we aim to estimate the unknown transformation from the joint appearance of both fixed and moving images. Our method is based on i) context-aware features, which allow us to guide the registration using not only local structural information, but also global appearance, and ii) regression forests to map the very large feature space to transformation parameters. Our approach improves the capture range as shown in experiments on the publicly available IXI dataset. Furthermore, it also allows us to perform multimodal registration in difficult settings where other similarity metrics tend to fail, as demonstrated by the registration of Intravascular Ultrasound (IVUS) and Histology images.
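The core idea of regressing transformation parameters from joint image features can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extraction, forest configuration, and the choice of a rigid 2-D parameterization (tx, ty, theta) are all assumptions made for the example.

```python
# Hypothetical sketch: a random-forest regressor mapping joint
# fixed/moving image features to rigid 2-D transform parameters
# (tx, ty, theta). Synthetic features stand in for the paper's
# context-aware features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: each row is a joint feature vector built
# from a (fixed, moving) image pair; each target row holds the
# transformation parameters that align the pair.
n_pairs, n_features = 200, 32
X = rng.normal(size=(n_pairs, n_features))   # joint appearance features
y = rng.uniform(-1, 1, size=(n_pairs, 3))    # (tx, ty, theta)

forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(X, y)

# At test time: estimate the transform for an unseen image pair
# from its joint feature vector.
theta_hat = forest.predict(X[:1])
print(theta_hat.shape)  # one (tx, ty, theta) estimate per input row
```

Because the forest predicts parameters directly from appearance, no iterative similarity-metric optimization is needed at test time, which is what gives the approach its larger capture range.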
