Keyhole surgery could become easier thanks to new software that adds a virtual map of a patient’s body to the doctor’s video feed.
Researchers in London are developing a form of ‘augmented reality’ for robotic-assisted laparoscopic surgery, where 3D video images from inside the body are overlaid with data collected from an MRI or CT scan.
This could allow surgeons to be much more precise as they move their instruments inside the body and help them avoid damaging tissue that they cannot see with an internal video camera alone.
‘You can see structures inside the organs and in that way you may be able to direct the surgery more accurately, for example excise a tumour with better margins or protect a blood vessel or nerve from accidental damage,’ research leader Dr Danail Stoyanov of University College London told The Engineer.
The system builds on existing research into algorithms that can calculate the geometric co-ordinates and movement of structures in images captured with 3D stereoscopic cameras.
This can be used to determine the position of visible organs in real time, which can then be matched with the virtual map of the body provided by the MRI or CT scan.
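Matching the reconstructed organ positions to the pre-operative scan is, in general terms, a point-set registration problem. The article does not describe which method the UCL team uses; purely as an illustration, a minimal rigid alignment between corresponding 3D points (the SVD-based Kabsch approach) might look like:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform mapping src points onto dst.

    src, dst : (N, 3) arrays of corresponding 3D points, e.g. points
               reconstructed from the camera and the same anatomical
               landmarks located in the MRI/CT volume.
    Returns a 3x3 rotation R and translation t with dst ~= (R @ src.T).T + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

This assumes known point correspondences and rigid anatomy; as Stoyanov notes below, real tissue is deformable, which is precisely what makes the surgical setting harder than this textbook case.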
‘The real challenge is that most of the existing work is based on rigid environments or on environments where light reflectance can be simplified,’ said Stoyanov, a Royal Academy of Engineering/EPSRC research fellow at UCL’s Centre for Medical Image Computing (CMIC) and Department of Computer Science.
‘In surgery the tissue is deformable and dynamic plus it is wet so the light response can be varied and changes depending on where you are looking at the tissue from.
‘Our work is to overcome these challenges by building algorithms that work in such an environment and also work in real time (near video frame rates).
‘To this end we have developed matching strategies to correspond structures between images so that 3D triangulation is possible with laparoscopic cameras.’
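To give a sense of what triangulation with a stereo laparoscope involves (the team's actual matching algorithms are not detailed in the article), here is a minimal sketch for an idealised rectified stereo rig, with hypothetical pixel coordinates and camera parameters:

```python
import numpy as np

def triangulate(xl, xr, y, f, baseline, cx, cy):
    """3D position of a feature matched between rectified stereo images.

    xl, xr   : x-coordinates of the same feature in left/right images (pixels)
    y        : shared row coordinate in the rectified pair (pixels)
    f        : focal length in pixels
    baseline : distance between the two cameras (metres)
    cx, cy   : principal point of the left camera (pixels)
    """
    disparity = xl - xr            # larger disparity means closer tissue
    Z = f * baseline / disparity   # depth along the optical axis
    X = (xl - cx) * Z / f          # back-project to camera coordinates
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical matched feature: f = 800 px, 5 mm baseline
point = triangulate(420.0, 400.0, 300.0, 800.0, 0.005, 320.0, 240.0)
```

The hard part, which the quote above alludes to, is not this geometry but reliably finding the matches in the first place on wet, deforming tissue, at video frame rates.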
As well as developing image-guided surgery techniques with Prof John Kelly of the Chitra Sethia Centre for Robotics and University College London Hospitals, Stoyanov is now working with Neil Tolley and Asit Arora of St Mary’s Hospital, Imperial College NHS Trust, to apply the software to transoral robotic surgery (performed at the base of the tongue).
He is also working with Dr Dan Elson of the Biophotonics and Surgical Imaging Lab at Imperial College London’s Hamlyn Centre for Robotic Surgery to see whether a better understanding of how light interacts with the tissue surface, informed by the geometric data, can reveal additional information about the health of the tissue.
The system is being designed to work with existing robotic surgery hardware but will require a user interface that indicates how much uncertainty there is about the information portrayed.
‘Ultimately the surgeon always has to make the final call but we need to give them as much information as possible without confusing them or giving them the wrong information,’ said Stoyanov.
The software also needs to be validated to confirm how much of an advantage it provides to surgeons and to identify the procedures it will benefit most, but Stoyanov hopes it could be available to doctors within a few years.