Pattern matching

A new geometric-based search technique is proving to be extremely reliable in locating the position and orientation of a pattern despite highly degraded images.

Finding patterns and reporting their location quickly and accurately is essential in today's high-precision electronics and semiconductor manufacturing environments. Traditional pattern recognition tools (correlation) cannot cope with changes caused by process variations and adverse conditions. A new geometric-based search technique, by contrast, locates the position and orientation of a pattern reliably even in highly degraded images.

Machine vision has evolved to become a mainstream automation tool enabling computers to replace human vision in high speed and high-precision manufacturing applications. Today, machine vision is being used to automate processing and ensure quality in manufacturing everything from diapers to the most advanced computer chips.

Two industries that have "pushed the envelope" of machine vision technology are semiconductors and electronics. In these segments, machine vision tools are used to precisely guide a variety of robotic handling, assembly, and inspection processes. The most significant challenge for machine vision in these automated applications is maintaining the ability to locate reference patterns despite changes in material appearance.

Normal process variations can produce a number of unpredictable conditions, including contrast reversal and intensity gradients, angular uncertainties, blur caused by changes in depth of field, scale changes, and partial obliteration or missing features.

The traditional approach to machine vision has been normalized greyscale correlation - a pattern matching technique that compares the shading between an image being inspected and an image that the vision system is trained on. For many years, correlation was the algorithm of choice for most machine vision system applications. It was reliable, easy to implement, and relatively undemanding for earlier generations of computing technology.
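To make the idea concrete, the sketch below (Python with NumPy; the function names are illustrative, not taken from any particular vision library) computes a normalized correlation score and slides a trained template across an image to find the best-scoring position.

```python
import numpy as np

def ncc_score(template: np.ndarray, region: np.ndarray) -> float:
    """Normalized greyscale correlation between a trained template and an
    equally sized region of the inspected image.
    Returns a value in [-1, 1]; 1 means a perfect shading match."""
    t = template.astype(float) - template.mean()
    r = region.astype(float) - region.mean()
    denom = np.sqrt((t * t).sum() * (r * r).sum())
    if denom == 0:  # flat, featureless patch: no meaningful correlation
        return 0.0
    return float((t * r).sum() / denom)

def correlation_search(image: np.ndarray, template: np.ndarray):
    """Slide the template over the image and return the best-scoring
    position (row, col) and its score: a brute-force illustration."""
    th, tw = template.shape
    best = (-2.0, (0, 0))
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc_score(template, image[y:y + th, x:x + tw])
            best = max(best, (s, (y, x)))
    return best[1], best[0]
```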

Process Variations Present Problems

Unfortunately, most greyscale correlation tools have difficulty coping with changes in appearance after being trained on a particular pattern. While traditional correlation tools are adequate for locating patterns under ideal conditions, they exhibit low tolerance to image changes in scale, angle, blur, obliteration, and contrast variation.

With the demands of advanced manufacturing processes, there are new and challenging conditions that correlation cannot overcome. For example, normalized greyscale correlation would be heavily confused by nonlinear contrast changes, where the greyscale value changes unpredictably on part of an image. Imagine a grey bar where the outline stays the same, but the inside turns from dark to light. Since a correlation system would attempt to match a pattern that it learned purely on greyscale values, it has a built-in problem with this type of intensity gradient or nonlinear contrast change.
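The effect is easy to reproduce numerically. In the sketch below (illustrative Python with a simplified bar pattern), a bar whose outline is unchanged but whose interior ramps from dark to light scores well below a perfect 1.0 against the trained template.

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized correlation, the same formula as above.
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Trained pattern: a dark bar on a light background.
trained = np.full((20, 60), 200, dtype=np.uint8)
trained[5:15, 10:50] = 40          # dark interior

# Same bar after a nonlinear contrast change: outline identical,
# interior drifting from dark to light across the bar.
changed = np.full((20, 60), 200, dtype=np.uint8)
changed[5:15, 10:50] = np.linspace(40, 230, 40, dtype=np.uint8)

print(round(ncc(trained, trained), 2))  # 1.0 under ideal conditions
print(round(ncc(trained, changed), 2))  # noticeably lower, same outline
```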

With normalized greyscale correlation, the traditional response was to work around, or simply avoid, manufacturing processes that created optical problems the vision system could not cope with. For example, if the inspection process produced specular reflection (bright patches where light reflects back toward the viewer), users would probably have worked around the problem by changing the lighting, adding special filters, or repositioning the machine in relation to the camera. In today's environment, these steps would put constraints on the manufacturing process that would very likely be unacceptable. In the semiconductor and electronics industries, there is often no feasible workaround when optical conditions are less than ideal.

Training on the Edges

However, there is a different way to approach these vision system challenges: geometric pattern matching. Rather than evaluating greyscale patterns within an image, this technique trains on the edges. It then fits the edges to a geometric model, which it uses to detect the corresponding features, and thus the target object, in an incoming image.

Vision challenges such as contrast reversal and intensity gradients that would be daunting to a correlation-based system are no longer an issue, because geometric pattern matching is not keyed to greyscale values, only to the position of edges. The system would not train on a grey bar, for example, but on the fact that the object is a bar in the first place; it would then search for the presence of the bar in new images. Even if the bar is a different shade, the geometry remains the same. It may no longer be grey (which would result in poor correlation), but the structure, the pattern and its edges, is still there.
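The sketch below illustrates the point under simplified assumptions: the "trained model" is just the set of strong-gradient pixel positions of a bar, extracted with a crude hypothetical edge detector, and exactly the same positions come back when the contrast is reversed. A production geometric search tool fits richer geometric primitives and searches over pose; this only demonstrates the invariance.

```python
import numpy as np

def edge_points(img: np.ndarray, thresh: float = 30.0) -> set:
    """Return the (row, col) positions where the intensity gradient is
    strong: a crude stand-in for a real edge extractor."""
    g = img.astype(float)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    ys, xs = np.where(mag > thresh)
    return set(zip(ys.tolist(), xs.tolist()))

# A dark bar on a light background ...
bar = np.full((20, 60), 200, dtype=np.uint8)
bar[5:15, 10:50] = 40

# ... and the same bar with its contrast reversed.
reversed_bar = 255 - bar

model_edges = edge_points(bar)          # "training" on the edges
scene_edges = edge_points(reversed_bar)

# The greyscale values changed completely, but the geometry did not:
print(model_edges == scene_edges)       # True
```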

Geometric edge detection is actually not new; MIT developed the algorithm in the 1950s. However, it saw little commercial use because the computing power required was too expensive and complex to deploy on a manufacturing floor. The technique remained relatively dormant for several decades, even as the limitations of correlation became increasingly apparent for emerging challenges in semiconductor and electronics applications.

Today, with the greater power of off-the-shelf processors, geometric pattern matching has become both technically and economically feasible.

What's In a Score?

Both greyscale correlation and geometric pattern matching produce a score that ranks the closeness of the match. However, some scores give more useful measurements than others. Correlation is only able to provide a "best guess" that does not necessarily indicate a valid match.

To illustrate, suppose the correlation system is trained on a black square and examines a white wall with a black square in the middle. Correlation will be poor across the wall and good in the middle, where the system returns a peak value at the position where the black square best correlates with the new scene. But what if the object were a black circle? It would still be far better correlated with the black square than the white wall. The score won't be perfect, but it will be relatively high. What does that mean, though? Unfortunately, we simply don't know. Perhaps the score wasn't perfect because of some optical anomaly, or maybe it wasn't the object you were looking for.
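This ambiguity can be reproduced with a small numeric experiment (illustrative Python; the exact values depend on the shapes used): a filled black circle correlates far better with a trained black square than the blank wall does, yet it is still the wrong object.

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized correlation, as in the earlier sketch.
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

size = 64

# Trained pattern: a black square centred on a white background.
square = np.full((size, size), 255, dtype=np.uint8)
square[16:48, 16:48] = 0

# A black circle of similar size instead of the square.
yy, xx = np.mgrid[0:size, 0:size]
circle = np.full((size, size), 255, dtype=np.uint8)
circle[(yy - 31.5) ** 2 + (xx - 31.5) ** 2 <= 16 ** 2] = 0

print(round(ncc(square, square), 2))   # 1.0: a true match
print(round(ncc(square, circle), 2))   # high, yet the object is not a square
```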

Obviously, there's a lot of guesswork in scores based on correlation. You could even train on a map on the wall, then examine a window shade and possibly get a score of 30. The conventional wisdom is to settle on a specific threshold (such as 70), accepting scores above it and rejecting those below. The problem is that even with a passing score, you still don't know whether you have found a match or just something that resembles the desired object.

Returning Meaningful Results

Geometric edge detection would approach the problem differently, by measuring how well the edges of the square that the system was trained on match up to the edges of the circle. It can also return a number of sub-scores that provide specific details about the image and its contents.

These include XY location, rotation from the trained reference image, contrast variance, geometric score (how well the edges matched up), and percentage of conformance (how many of the edges were found). By analyzing the structure, the vision system can provide much more concrete information about whether an object is present than the "best guess" attempts provided by greyscale correlation.

For example, suppose the vision system is trained to detect a cross, and it examines a new image in which one of the cross's arms has been clipped off. The system may return an overall score of 75 to indicate that most of the object is detected. However, other measurements might be more significant depending on the application; for example, the system could report that three edges fit perfectly but one edge was missing. It's up to the user to choose which scores are of interest: one user may only care whether the overall score is above or below a specified value, while another may want the more specific information.
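A hypothetical result record along these lines shows how the extra detail might be used; the field names, the way the overall score is combined, and the thresholds below are illustrative assumptions rather than the interface of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    """Illustrative sub-scores a geometric search tool might return."""
    x: float                 # X position of the match, in pixels
    y: float                 # Y position of the match, in pixels
    rotation_deg: float      # rotation relative to the trained reference
    contrast: float          # contrast variance versus the trained image
    geometric_score: float   # how well the found edges fit the model (0-100)
    conformance_pct: float   # percentage of model edges actually found

# The cross with one arm missing from the example above (values invented):
cross = MatchResult(x=412.3, y=118.7, rotation_deg=1.4, contrast=0.9,
                    geometric_score=98.0, conformance_pct=75.0)

def accept(result: MatchResult,
           min_overall: float = 70.0,
           min_conformance: float = 90.0) -> bool:
    """One user may only care about an overall pass/fail threshold;
    another may also insist that nearly every edge be present."""
    overall = result.geometric_score * result.conformance_pct / 100.0
    return overall >= min_overall and result.conformance_pct >= min_conformance

print(accept(cross))                        # False: an arm is missing
print(accept(cross, min_conformance=70.0))  # True: 3 of 4 arms is enough here
```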

Real World Examples

Geometric edge matching is well suited for a wide variety of operations in the semiconductor industry where there is a need to precisely align wafers or die so that activities such as lithography, cutting, placing, and/or bonding can be performed to extremely tight tolerances. One example is Pick & Place, where a robot picks up parts from a tray or waffle pack.

The robot moves to where the parts are supposed to be, snaps a picture, then calculates the location with great precision in order to pick up the component and position it onto a printed circuit assembly. In a more challenging scenario, components could be scattered about a shaker table, with their orientation and location unknown. Edge detection tools would be able to locate the parts and determine their orientation so they can be rotated as needed. In contrast, correlation would be stymied by the rotated patterns.
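A brute-force sketch of how an edge-based locator can recover both position and angle is shown below: the trained model is reduced to a set of edge points, and the search rotates and translates that point set over candidate poses, scoring each pose by how many model points land on scene edges. Commercial tools are vastly faster and subpixel-accurate; this helper is a simplified illustration only.

```python
import numpy as np

def locate(model_pts: np.ndarray, scene_edges: set,
           angles=np.arange(0, 360, 2), search=range(0, 200, 2)):
    """Exhaustive pose search: rotate the trained edge points by each
    candidate angle, translate them over the scene, and keep the pose
    where the largest fraction of points falls on a scene edge."""
    best = (0.0, 0.0, (0, 0))            # (coverage, angle, (dy, dx))
    centered = model_pts - model_pts.mean(axis=0)
    for a in angles:
        t = np.deg2rad(a)
        rot = centered @ np.array([[np.cos(t), -np.sin(t)],
                                   [np.sin(t),  np.cos(t)]])
        for dy in search:
            for dx in search:
                pts = np.rint(rot + [dy, dx]).astype(int)
                hits = sum((y, x) in scene_edges for y, x in pts)
                best = max(best, (hits / len(pts), a, (dy, dx)))
    coverage, angle, (y, x) = best
    # x, y give the part's position and angle its orientation: exactly the
    # information the robot needs to pick the component and rotate it.
    return y, x, angle, coverage
```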

Another example can be seen in CMP (chemical/mechanical polishing), a chip fabrication process that involves polishing a wafer to remove surface debris. As the top of the wafer is smoothed away, features become smaller and closer together, like a canyon closing in toward the bottom. Scale differences as a result of this process would create problems for correlation attempting to match greyscale values. However, a vision system using edge detection can still identify the structure and edges based on the geometric model on which it was trained.
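Under the same simplified edge-point model, scale tolerance amounts to shrinking or growing the trained points before scoring, as sketched below. For brevity the pattern is assumed to be already roughly aligned, so only scale is searched; a real tool searches position, angle, and scale together.

```python
import numpy as np

def best_scale(model_pts: np.ndarray, scene_edges: set,
               scales=np.arange(0.7, 1.01, 0.05)):
    """Try progressively smaller versions of the trained edge model and
    report the scale whose points best cover the scene edges."""
    centre = model_pts.mean(axis=0)
    best = (0.0, 1.0)                     # (coverage, scale)
    for s in scales:
        pts = np.rint(centre + (model_pts - centre) * s).astype(int)
        hits = sum((y, x) in scene_edges for y, x in pts)
        best = max(best, (hits / len(pts), float(s)))
    return best[1], best[0]               # best-fitting scale and its coverage
```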

Adapting to the Process

The emergence of edge detection vision tools does not mean the imminent demise of normalized greyscale correlation. There will be situations where correlation will remain feasible, even optimal, such as when examining closely similar images that are not susceptible to major variations. However, with its greater accuracy under adverse conditions, geometric edge detection is becoming the pattern matching technique of choice for complex and challenging applications.

Companies such as Coreco Imaging are working to expand the capabilities of geometric edge detection, both in terms of accuracy and also in the range of information available to intelligently evaluate an image. As industries implement new manufacturing processes with increasingly variable conditions, we will see machine vision tools become more adaptive and precise even when presented with less than perfect images.