Inviso imaging system inspired by the brain

An image processor inspired by the human visual system could open up a host of industrial applications.

Developing a complex system that mimics how the brain processes images requires a multi-disciplinary approach to systems design.

It involves not only scientists who understand how the brain performs such image-processing functions and mathematicians who can create algorithms to model those processes, but also engineers who can develop custom software to deploy those algorithms on silicon-based systems.

It might all sound far-fetched, but it isn’t – it is, in fact, precisely the approach that has been taken at Netherlands-based Inviso, which recently took the wraps off an image-processing system inspired by the workings of the visual cortex of the human brain.

The ImageBOOST platform is based around a Xilinx FPGA

Founded by Dr Frans Kanters in November 2005, the company demonstrated its new system at the Stuttgart Vision show in November 2010 after a five-year collaborative development involving Prof Bart ter Haar Romeny of the Biomedical Engineering Department and Prof Luc Florack from the Mathematics Department at the Eindhoven University of Technology.

The Inviso system uses a set of proprietary mathematical algorithms that duplicate the way the brain extracts information from images – information that would otherwise be difficult, or even impossible, to detect by conventional means. As such, the algorithms could be used in a variety of industrial applications that are currently off limits to many existing image-processing systems. Kanters, Inviso’s president, said that one reason for the effectiveness of the human visual system is that it can take into account contextual information found in an image when locating an object or identifying a specific area of interest.

’By using contextual information, the human visual system can essentially “fill in” certain details that might be missing from parts of an image – which would be impossible if such contextual information was not present in the image in the first place,’ he said.

Inviso’s process results in a stack of new images, each containing specific directional data about an object

The ability of the human visual system to take into account such information to identify details in an image is a result of the way that both the primary and secondary visual cortex in the brain work in concert to process the images that are received from the eye.

In the primary visual cortex, simple cells become active when stimulated by a part of an image whose orientation matches the cell’s preferred direction.

Complex cells then combine the data from several of these simple cells, from which they can detect the position and orientation of lines found in an image. Next, hyper-complex cells extract low-level contextual information from the image – notably the end points of the lines and where they intersect. This data is then passed to the secondary cortex, which performs further contextual associations on the data.

’The powerful mechanisms used by the brain’s primary visual cortex to detect specific structures in images by using such contextual techniques are exactly the process we have mimicked by algorithms that perform similar functions,’ said Kanters.

The Inviso system uses a set of mathematical algorithms

In the Inviso approach, a number of 2D mathematical image convolutions are first performed across an entire image. The process – which results in a stack of new images, each containing specific directional data about the features of the objects in the image – mimics the operation performed by the complex cells in the primary visual cortex.
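Inviso’s actual kernels are proprietary and not disclosed, but the oriented 2D convolution stage described above resembles a standard oriented filter bank. A minimal sketch in Python, assuming Gabor-style kernels as a stand-in:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, size=9, sigma=2.0, wavelength=4.0):
    """Real-valued Gabor-style kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier wave runs along theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return kernel - kernel.mean()  # zero mean: no response to flat regions

def orientation_stack(image, n_orientations=8):
    """Convolve the image with one kernel per orientation, giving a stack
    of new images that each carry directional data -- one plane per angle."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return np.stack([convolve(image, gabor_kernel(t)) for t in thetas])

# A vertical bar excites the vertically tuned plane (theta = 0) most strongly.
img = np.zeros((32, 32))
img[:, 15:17] = 1.0
stack = orientation_stack(img)
```

Each plane of `stack` plays the role of one population of orientation-selective complex cells; the function names and parameter values here are illustrative choices, not Inviso’s.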

In itself, however, this data would not provide any information about the relationship between the lines in an image, and so a 3D mathematical convolution is performed on the oriented stack of images to do just that.

’This operation extracts the line information from each of the images in the stack and interprets how the directional information found in each plane of the image stack relates to that in the others,’ said Kanters. ’In effect, the operation simulates the function of the hyper-complex cells in the primary visual cortex of the brain that extract crossing or end-point information, extracting a low level of context from the data set.’
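Kanters does not detail the 3D convolution itself. As an illustrative stand-in – not Inviso’s algorithm – one can couple neighbouring orientation planes and spatial positions with a small 3D kernel, then look for pixels where two roughly perpendicular orientations respond at once, the signature of a crossing:

```python
import numpy as np
from scipy.ndimage import convolve

def crossing_map(stack):
    """Illustrative crossing detector over an oriented image stack.
    A 3x3x3 averaging kernel couples each orientation plane with its
    neighbours (the orientation axis is periodic, hence mode='wrap');
    a pixel then scores highly only when some orientation AND the one
    perpendicular to it both respond strongly at that position."""
    n = stack.shape[0]          # number of orientation planes (assumed even)
    kernel = np.ones((3, 3, 3)) / 27.0
    smoothed = convolve(np.abs(stack), kernel, mode="wrap")
    half = n // 2               # plane i + half is perpendicular to plane i
    return np.max(smoothed[:half] * smoothed[half:], axis=0)

# Synthetic 8-plane stack: plane 0 (vertical) active along column 8,
# plane 4 (horizontal) active along row 8 -- the lines cross at (8, 8).
stack = np.zeros((8, 16, 16))
stack[0, :, 8] = 1.0
stack[4, 8, :] = 1.0
score = crossing_map(stack)
```

The crossing score peaks where the two synthetic lines intersect and is zero away from them; end-point detection would need a different, asymmetric kernel, which this toy example does not attempt.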

Just as the secondary cortex in the brain performs further contextualisation on the data presented to it by the hyper-complex cells, so the output of the Inviso algorithm that mimics those cells must be processed further if more contextual data is to be extracted from it.

However, as researchers understand less about the exact means by which the secondary cortex carries out these functions, the Inviso team did not attempt to model the function of the secondary cortex directly.

The human visual system can take in contextual information

Instead, said Kanters, any additional mathematical operations on the data would be performed by algorithms written with a specific application in mind. These could perform any number of functions, such as selecting a specific area of interest from an image, enhancing a part of it or measuring the parameters of objects of interest within it.

The Inviso convolution algorithms that model the visual cortex have already proved effective at detecting the location of cracks in natural stone slabs. According to Kanters, owing to the nature of the stone, many false positives are detected by image-processing systems that cannot take into account the contextual data from images of the stone. But by using the new system, the detection quality can be increased and the number of false positives reduced.

In this application, the Inviso convolution algorithms that model the functions of the primary cortex were used to extract specific details about the orientation and relationship between the lines on the faces of the slabs.

’That information was then processed by a second algorithm that determined how likely it was that the line information represented a crack in the stone,’ said Kanters. ’The algorithm determines how closely the low-level contextual data can be matched to a model that contains the typical parametric characteristics of cracks found in the slabs.’
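The published details stop at matching line data against a parametric crack model. A toy sketch of such a match, with made-up model parameters (Inviso’s actual crack model is not disclosed), assuming cracks are long and nearly straight while surface texture is short and wavy:

```python
import math

# Hypothetical crack model -- these thresholds are illustrative only.
CRACK_MODEL = {"min_length": 20.0,     # pixels: cracks are long...
               "max_mean_turn": 0.15}  # radians: ...and nearly straight

def crack_likelihood(polyline):
    """Score a candidate line (a list of (x, y) points) against the model.
    Returns a value in [0, 1]; 1 means a close match to the crack model."""
    if len(polyline) < 3:
        return 0.0
    # Total arc length of the candidate line.
    length = sum(math.dist(a, b) for a, b in zip(polyline, polyline[1:]))
    # Mean absolute turning angle between successive segments.
    turns = []
    for p, q, r in zip(polyline, polyline[1:], polyline[2:]):
        a1 = math.atan2(q[1] - p[1], q[0] - p[0])
        a2 = math.atan2(r[1] - q[1], r[0] - q[0])
        turns.append(abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1))))
    mean_turn = sum(turns) / len(turns)
    length_score = min(length / CRACK_MODEL["min_length"], 1.0)
    limit = CRACK_MODEL["max_mean_turn"]
    straightness = 1.0 if mean_turn <= limit else limit / mean_turn
    return length_score * straightness

crack = [(i, 0.0) for i in range(30)]            # long and straight
vein = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]  # short zig-zag texture
```

A long straight polyline scores near 1 while the short zig-zag scores near 0, which is the behaviour that suppresses the false positives Kanters describes; the real system would fit richer parametric characteristics than length and curvature alone.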

In the brain, visual data is processed in parallel by all the cells in the primary visual cortex. To create this level of parallelism, the team created the Inviso ImageBOOST hardware platform, based around a Xilinx field-programmable gate array (FPGA) programmed to optimise the execution of the model of the primary visual system and the crack detection matching algorithm.

When the ImageBOOST platform was then fed image data from the stone slabs, it extracted the crossing and end-point information from the images, computed the likelihood of a crack at each point and produced a resultant image showing where any cracks might lie in a given slab.

To commercialise the intellectual property, Kanters and the Inviso team are looking to form partnerships with interested companies to develop application-specific systems based on the algorithms they have developed.

Kanters said his company has formed close relationships with several groups in the medical and aerospace sectors that have recognised the benefits of Inviso’s biologically inspired image-processing methodology.