Sizing up the sorting issue

A system that can automatically sort small plastic tags could provide new uses for industrial vision technology. David Wilson reports.

Reducing the need for manual sorting is one of the big challenges for automation systems, and the development of ever more advanced machine vision technology is playing a key role in meeting it.

Now a team of specialists in the field has used state-of-the-art image-processing techniques to help the retail sector reduce the time and cost needed to deal with one of the most basic tools of its trade - the humble size tag.

High-street shops use small coloured plastic tags with white numbers - known in the trade as cubes - on clothes hangers to indicate garment sizes.

Once an item is purchased and the unwanted cube removed from its hanger, the cubes must be collected from each store and sent to a central depot, where they are sorted before they can be reused.

In the past, this has been a time-consuming process involving tens of operators, who visually examined each of the coloured tags and placed them into bins according to the numbers printed on them. Seeking to automate the procedure, one high-street chain commissioned engineers at Oxford-based Industrial Vision Systems and Birmingham-based RNA Automation to develop a system that could sort 300 cubes a minute, removing the need for repetitive manual labour.

In the new system, thousands of cubes are loaded into a large RNA bowl feeder from a bulk hopper that continuously replenishes it. The vibrating bowl feeder drives the cubes up a ramp on the curved surface of the bowl, from which they emerge in single file and are placed onto a linear conveyor.

A rotating mechanical indexer then removes each cube from the track at set intervals, creating a repeatable spacing between them, before placing them onto a second conveyor. This carries each cube through the heart of the sorting machine, where a PC-based image-processing system identifies the number on each part.

To develop an image-processing system capable of handling the high-speed recognition task, engineers at Industrial Vision Systems first needed to determine the precise nature of the parts to be examined.

Initially, the engineers noted that the number is repeated three times around the circumference of each cube. Armed with this knowledge, they recognised that two cameras positioned to cover a 270° field of view could capture an image of the number printed on a cube whichever way it is facing as it travels down the conveyor.

‘The cubes could be conveyed through the image-processing system in any orientation, and even upside down, so it was important to ensure the system could pick out the number no matter how the cubes were presented,’ said Earl Yardley, director of Industrial Vision Systems. As a cube enters the shrouded section of the system, a position sensor triggers the cameras to capture an image of the number on it. The cameras in turn trigger a white LED strobe, synchronising the flash with the exposure so that the field of view is illuminated at the exact moment the part moves into it.
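
A minimal sketch of that trigger chain, purely for illustration: the sensor, camera and strobe callables below are hypothetical placeholders rather than any real device or NeuroCheck API.

```python
import time

def acquisition_loop(position_sensor_hit, trigger_cameras, fire_strobe):
    """Illustrative only: wait for a cube to reach the position sensor,
    then start the exposure and fire the strobe during it."""
    while True:
        if position_sensor_hit():      # cube has entered the shrouded section
            trigger_cameras()          # start exposure on both cameras
            fire_strobe()              # flash synchronised with the exposure
        time.sleep(0.001)              # polling interval for this sketch
```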

The two images from the cameras are then analysed by NeuroCheck image-processing software running on a PC to determine which of them contains the largest feature worth processing further - and therefore which camera has the full character in its field of view. Then, after a region of interest has been selected, the software uses a background colour segmentation process to separate the white number from the coloured background of the cube, after which an optical character recognition function identifies the number.
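
The article does not show NeuroCheck's functions, but a rough OpenCV sketch of the same kind of pipeline - pick the view with the largest feature, then segment the white digits from the coloured background - might look like this; the thresholds and helper name are assumptions.

```python
import cv2
import numpy as np

def extract_character(frame_a, frame_b):
    """Pick the frame with the largest white feature and return a binary
    mask of the number, ready for character recognition."""
    best_mask, best_area = None, 0
    for frame in (frame_a, frame_b):
        # Segment the white digits from the coloured cube background:
        # low saturation, high value in HSV roughly corresponds to white.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 0, 180), (179, 60, 255))
        # Measure the largest connected feature visible in this view.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        area = max(cv2.contourArea(c) for c in contours)
        if area > best_area:
            best_area, best_mask = area, mask
    return best_mask  # None if neither camera saw a usable feature
```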

Because the cameras also capture the specific colour of each tag, this information can be used to help categorise the parts. Although the colour of the 64 cube variants is not directly related to the numbers printed on them, the cubes can be grouped into colour sets, each of which contains only a finite set of numbers.

‘It is possible to relate the colour information captured by the cameras to a finite set of numbers from which an image from the tag could then be matched. This reduced the amount of processing that needed to be performed by the system and increased its accuracy,’ added Yardley.
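
A small sketch of how such a colour-to-candidates lookup might work; the group table below is invented purely for illustration and is not the chain's real mapping.

```python
# Hypothetical mapping from a cube's colour group to the numbers that can
# appear on cubes of that colour (illustrative values only).
COLOUR_GROUP_TO_NUMBERS = {
    "dark_blue": {8, 10, 12},
    "red": {14, 16, 18},
}

def candidate_numbers(colour_group):
    """Return the finite set of numbers allowed for this colour group."""
    return COLOUR_GROUP_TO_NUMBERS.get(colour_group, set())

def classify(colour_group, ocr_scores):
    """Pick the best-scoring OCR result that is allowed for this colour group."""
    allowed = candidate_numbers(colour_group)
    return max((n for n in ocr_scores if n in allowed),
               key=lambda n: ocr_scores[n], default=None)
```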

In many image-processing applications, colour images are classified by their RGB (red, green and blue) values. But the engineers at Industrial Vision Systems recognised that they could identify each part faster by processing the images in the CIE Lab colour space, in which the colour differences between the cubes can be determined more accurately.
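
As an illustration of nearest-reference matching in Lab space - with OpenCV standing in for the actual software and the reference table assumed - a sketch could look like this.

```python
import cv2
import numpy as np

def mean_lab(bgr_patch):
    """Average Lab value of a patch taken from the cube's coloured background."""
    lab = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2LAB).astype(np.float32)
    return lab.reshape(-1, 3).mean(axis=0)

def nearest_colour_group(bgr_patch, reference_labs):
    """Return the name of the reference colour closest in Lab space,
    where Euclidean distance approximates perceptual colour difference."""
    sample = mean_lab(bgr_patch)
    return min(reference_labs,
               key=lambda name: np.linalg.norm(sample - reference_labs[name]))
```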

The sorting application was built by Industrial Vision Systems using software developed by its sister company, NeuroCheck, based in Stuttgart, Germany.

Digital FireWire cameras were integrated into the system using NeuroCheck’s device manager, which allowed the engineers to set camera parameters such as exposure time and gain. Using the software’s image-analysis tools, the engineers could then determine which imaging functions would extract the features of the cube most accurately.
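
NeuroCheck’s device manager itself is not shown in the article; as a rough stand-in, comparable parameters can be set on a camera through OpenCV, although which properties are honoured depends on the camera driver.

```python
import cv2

cap = cv2.VideoCapture(0)            # first attached camera (assumed index)
cap.set(cv2.CAP_PROP_EXPOSURE, -6)   # exposure, in driver-specific units
cap.set(cv2.CAP_PROP_GAIN, 4)        # analogue gain, driver-specific units
ok, frame = cap.read()               # grab one frame with the new settings
cap.release()
```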

The final program was then developed by selecting a sequence of functions from NeuroCheck’s standard check routines - a set of software modules that provide ready-to-run algorithms for image capture, processing and control.

‘Clearly, each vision recognition task is different and so it’s important that the software can be easily tailored to recognise any part. In this case, the software was taught to identify each of the 64 colours of cube and place them into 28 groups prior to the optical character recognition. To do so, the colour data from thousands of cubes was captured as they passed through the system before determining an average value that could be used by the software to group them,’ said Yardley.
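
A sketch of that teaching step - averaging colour samples per known variant to form reference values - under the assumption that samples have already been collected as Lab triples.

```python
import numpy as np

def learn_reference_colours(samples_by_variant):
    """samples_by_variant maps a variant name to a list of Lab triples
    measured as cubes passed through the system; the mean per variant
    becomes that variant's reference colour for later matching."""
    return {variant: np.mean(np.asarray(samples), axis=0)
            for variant, samples in samples_by_variant.items()}
```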

Once the software had determined the colour of a cube, the system ran the extracted character shape through a multi-layer perceptron neural network classifier to identify the character. This is a difficult task because the cubes can be rotated into any orientation, so the characters are not presented consistently in shape or size. A final decision is then made on which of the 68 final groups the cube belongs to.
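
By way of illustration only, a multi-layer perceptron character classifier of the general kind described could be sketched with scikit-learn standing in for NeuroCheck’s own classifier; the input format and training data are assumptions, with rotation tolerance coming from training on examples captured in many orientations.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_character_classifier(X, y):
    """X: (n_samples, n_pixels) array of flattened, normalised character
    crops in many orientations; y: the corresponding character labels."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X, y)
    return clf

def recognise(clf, char_image):
    """Classify one flattened character image."""
    return clf.predict(char_image.reshape(1, -1))[0]
```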

‘To perform colour image processing and neural-network-based optical character recognition in 120ms and ultimately decide which of the 68 variants the cube belongs to is cutting-edge vision-system technology,’ claimed Yardley.

Once the image-processing software had been created and shown to be capable of determining the number on each cube, the NeuroCheck device manager was again used to create a path through which the digital data representing that number could be transmitted over a hardwired interface from the PC to a programmable logic controller (PLC).

That data, together with data from a digital encoder on the conveyor that tracks the position of each cube, is used by the PLC to determine which of an array of pneumatic actuators further down the linear track should fire to blow each cube into the sorting mechanism and then into a bin according to the size printed on it.
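
A simplified sketch of that tracking logic - queueing each classified cube with the encoder count at which its ejection valve should fire - with the valve interface and offsets assumed; the real logic runs on the PLC rather than a PC.

```python
import heapq

class EjectController:
    def __init__(self, fire_valve, valve_offsets):
        # valve_offsets: encoder counts from the camera station to each
        # ejection valve, keyed by sorting group (i.e. destination bin).
        self.fire_valve = fire_valve
        self.valve_offsets = valve_offsets
        self._due = []   # min-heap of (fire_at_count, group)

    def cube_classified(self, group, encoder_now):
        """Called when the vision system reports a cube's group."""
        heapq.heappush(self._due, (encoder_now + self.valve_offsets[group], group))

    def on_encoder_tick(self, encoder_now):
        """Called on every encoder update; fires any valves that are due."""
        while self._due and self._due[0][0] <= encoder_now:
            _, group = heapq.heappop(self._due)
            self.fire_valve(group)   # blow the cube into its bin
```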

The system is claimed to be almost 100 per cent accurate. Parts that cannot be classified, either because they are damaged or illegible, continue down the conveyor and are collected at the end, either to be fed back into the system or scrapped.

David Wilson