Image-guided Navigation

The use of minimally invasive and flexible access surgery has imposed significant challenges on surgical navigation, as the operator no longer has direct access to the surgical site with unrestricted vision, force, and tactile feedback. How to combine prior knowledge of the anatomical model with subject-specific information derived from pre- and intra-operative imaging is a significant research challenge. Furthermore, effective surgical guidance is essential to the clinical application of dynamic active constraints for robotically assisted MIS and to the development of new surgical tools for emerging flexible access surgical approaches such as Natural Orifice Transluminal Endoscopic Surgery (NOTES) and Single Incision Laparoscopic Surgery (SILS).

For image-guided surgery, another key requirement is the augmentation of the exposed surgical view with pre- or intra-operatively acquired images or 3D models. The inherent challenge is accurate registration of the pre- and intra-operative data to the patient, especially for soft tissue undergoing large-scale deformation. With the increasing use of intra-operative imaging techniques, our work has focused on developing high-fidelity Augmented Reality (AR) techniques, combined with real-time surgical vision and new robotic instruments, to further enhance the accuracy, safety, and consistency of MIS procedures.
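In its simplest rigid form, the registration problem above reduces to a landmark-based least-squares alignment, commonly solved with the Kabsch/Procrustes method. The sketch below is purely illustrative (the function name and NumPy dependency are our assumptions, not the group's actual pipeline): it recovers a rotation and translation from paired fiducial points, and would only be a starting point before the non-rigid models that deforming soft tissue requires.

```python
import numpy as np

def rigid_register(source, target):
    """Estimate rotation R and translation t minimising
    sum_i || R @ source[i] + t - target[i] ||^2 (Kabsch algorithm).
    source, target: (N, 3) arrays of corresponding landmark points."""
    # Centre both point sets on their centroids.
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    A = source - src_c
    B = target - tgt_c
    # Cross-covariance and its SVD give the optimal rotation.
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Given at least three non-collinear corresponding landmarks, this returns the rigid transform mapping the pre-operative frame onto the intra-operative one; deformable registration then refines the residual soft-tissue motion.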

Research themes

Surgical workflow analysis

Surgical episode segmentation based on vision, instrument usage and kinematics, and video-oculography (eye tracking); workflow recovery for intra-operative CAD (computer-aided decision support) and machine learning; analysis and modelling of surgical workflow with consideration of perceptual and cognitive factors, as well as team interaction, for detecting disorientation, hesitation and precursors to surgical errors.
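As a toy illustration of the episode-segmentation idea (not the group's actual method, which fuses the multi-modal cues listed above), one can run-length encode a thresholded instrument-speed trace into idle/active episodes; the 1-D signal, threshold value, and function name here are hypothetical simplifications.

```python
import numpy as np

def segment_episodes(speed, threshold=0.5):
    """Split a 1-D instrument-speed trace into contiguous episodes.
    Returns a list of (label, start, end) half-open index ranges,
    labelled 'active' where speed > threshold, else 'idle'."""
    active = speed > threshold
    episodes = []
    start = 0
    for i in range(1, len(speed) + 1):
        # Close the current episode at the end of the trace or on a label change.
        if i == len(speed) or active[i] != active[start]:
            episodes.append(("active" if active[start] else "idle", start, i))
            start = i
    return episodes

# e.g. segment_episodes(np.array([0.1, 0.1, 0.9, 0.9, 0.9, 0.2]))
# → [('idle', 0, 2), ('active', 2, 5), ('idle', 5, 6)]
```

In practice such segments would come from learned models over richer features (tool trajectories, video, gaze), but the output structure, labelled temporal episodes, is the same.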