
Application of computer vision in surgical training and surgical robotics

Abstract

The capacity of Minimally Invasive Surgery (MIS) to reduce pain, expedite patient recovery, shorten hospital stays, and improve cosmetic results compared to open procedures has made it a growing trend in abdominal surgery. However, difficulties inherent to the equipment (limited visualization, the tool fulcrum effect, etc.) underscore the importance of specialized surgical training and the need to provide high-quality visualization to the surgeon. Simulation-based surgical training has been growing not only as an innovative way to teach surgery but also as a method to assess a surgeon's skill in performing a procedure without risk to patients. However, cost and limited accessibility make most surgical simulators ill-suited for training large numbers of surgical trainees. Existing portable simulators also lack performance feedback, such as instrument distance traveled and task completion time, reflecting a trade-off between simulator size and capability. To address these problems, a cost-effective and portable simulator has been designed that imitates specific laparoscopic training tasks in virtual environments via image processing and computer vision. Image processing provides real-time mapping of the graspers in the workspace to the virtual reality (VR) environment across four training tasks (peg transfer, needle passing, cube transfer, and eye-hand coordination). The ability to use actual surgical instruments suited to these specific procedures heightens the simulator's fidelity. Pilot testing of the system was carried out to validate its similarity to an existing surgical training environment and assess the impact of the differences.

In the second phase of the dissertation, a self-guided robotic camera control was designed to replace the camera assistant during robotic surgery.
The system automatically tracks the robotic tools and manipulates the camera to maintain the best field of view. Bench-top testing was performed, and results for both manual and automatic camera manipulation are presented.
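The abstract does not specify the camera-control law. As a minimal illustration of the tool-following idea, one could use a proportional pan/tilt update that nudges the camera toward the tracked tool's image centroid; the image resolution, gain value, and function names below are hypothetical, not taken from the dissertation.

```python
# Illustrative sketch only: proportional pan/tilt control that keeps a
# tracked tool centroid near the image center. Image size, gain, and
# names are assumptions, not the dissertation's actual design.
IMG_W, IMG_H = 640, 480   # assumed endoscope image resolution (pixels)
GAIN = 0.002              # assumed radians of camera motion per pixel of error

def camera_step(tool_px, tool_py, pan, tilt):
    """One control update: move pan/tilt toward centering the tool."""
    err_x = tool_px - IMG_W / 2   # horizontal pixel error from image center
    err_y = tool_py - IMG_H / 2   # vertical pixel error from image center
    return pan + GAIN * err_x, tilt + GAIN * err_y

# Tool detected to the right of image center: pan increases to follow it.
pan, tilt = camera_step(400, 240, pan=0.0, tilt=0.0)
```

A real system would add a dead zone and rate limits so the camera does not chase every small tool motion, but the centering principle is the same.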