Facial imaging camera paves way for personalised surgery
New facial imaging technology could one day help doctors plan personalised surgery based on detailed computer models of patients’ heads.
Scientists at Cardiff University have used advances in LED and control technologies to develop a video camera that collects 3D data on the precise shape of a patient’s face, and are using it to assess the success of facial surgery.
They hope that by combining the imagery with CT scans of the bone structure and MRI data on muscle shapes, they will be able to use the technique for bespoke surgical planning.
‘Eventually we can put all this together and we would have a full model of a face,’ Professor David Marshall from Cardiff University’s School of Computer Science & Informatics told The Engineer.
‘Then you could do full planning and reconstruction because you would know exactly where the muscles were, where to do the right cuts and where you would reattach muscles.’
The new camera captures two video feeds from slightly different angles and matches points in each frame of one feed to the corresponding points in the other, building a 3D geometric model of a person’s face. A third feed captures the visual texture of the face.
The camera projects a speckled pattern of light onto the face in order to make it easier to match corresponding points with one another.
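Once a point on the face has been matched across the two views, its 3D position can be recovered by triangulation. The sketch below illustrates the standard linear (DLT) triangulation method with two made-up toy cameras; it is not the Cardiff team’s actual pipeline, and all camera parameters are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from one matched pixel pair seen by two
    calibrated cameras, using the linear DLT method."""
    u1, v1 = pt1
    u2, v2 = pt2
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identical intrinsics, the second offset 0.1 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point to pixel coordinates with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

point = np.array([0.05, 0.02, 0.6])  # a true 3D point, in metres
recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
print(recovered)  # matches the true point to numerical precision
```

The projected speckle pattern described above does not change this geometry; it simply gives otherwise smooth skin enough texture that the point-matching step can find unambiguous correspondences.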
Previous cameras have used infra-red light for this, but the Cardiff team, working with the American firm 3dMD, used the latest LED technology to replace it with pulses of visible light. These flash too quickly to be seen by the human eye, yet produce a more accurate model than has previously been achieved.
‘We’ve now got really good LED panels that give you the right amount of light and that can be pulsed accurately at this high frame rate,’ said Marshall. ‘The latest technology that has just come onto the market is a game changer because you can synchronise these panels together.’
Marshall and his colleagues are also working with psychologists to use the technology to study facial expressions, particularly during conversations, which could aid the development of more realistic animated or robotic faces.
To do this they have developed algorithms to precisely synchronise two cameras, enabling them to model two moving faces as they interact.
Professor Stephen Richmond, Cardiff’s head of applied clinical research, said: ‘Not only will we be able to objectively assess a patient’s functional outcome and how others in the community react to the outcome, our team will be able to start advancing computerized simulation models to replicate facial expression and functional behaviour for those patients undergoing treatment.’