Invariant Pattern Location and Matching

We are developing pattern matchers for industrial (i.e., high-speed) applications. At present we focus on two 2D variants: one uses color and a perceptual-organization step, the other uses structural features.

Using color-based perceptual organization

Here we solve a color-based pattern-matching problem. A real-time vision system must detect and localize textile patterns woven into carpet flooring material. These patterns are distributed periodically across a large web. The task is to recognize the patterns by matching them against stored prototypes, compute their exact location, and use this information to guide a cutting machine that produces perfect replicas of the desired tiles. The matching is challenging because of distortion, scaling, and rotation of the 2D patterns, high demands on localization accuracy, and hard real-time constraints. Our system consists of two building blocks: color-based segmentation of the patterns into a graph-like 2D representation, followed by graph-based matching. The matching block solves the graph-isomorphism problem in real time while tolerating distortions, additions, deletions, rotation, translation, and scale differences between the trained and tested versions of a pattern.
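The idea of matching color-segmented patterns as attributed graphs can be sketched as follows. This is a simplified illustration, not the production algorithm: blobs are reduced to (color, centroid) pairs, pairwise distances are normalized by the largest distance so the signature is invariant to translation, rotation, and scale, and a tolerance absorbs small distortions. The brute-force permutation search stands in for the real system's pruned graph matcher.

```python
import itertools
import math

def normalized_distances(blobs):
    """Pairwise centroid distances divided by the largest one, making the
    signature invariant to translation, rotation, and uniform scaling."""
    pts = [p for _, p in blobs]
    d = [[math.dist(a, b) for b in pts] for a in pts]
    m = max(max(row) for row in d) or 1.0
    return [[v / m for v in row] for row in d]

def match(template, scene, tol=0.05):
    """Try every color-consistent node assignment (brute force; a real
    matcher would prune aggressively) and accept when all normalized
    distances agree within `tol`, which tolerates small distortions.
    Returns the assignment as a tuple of scene indices, or None."""
    if len(template) != len(scene):
        return None
    dt = normalized_distances(template)
    n = len(template)
    for perm in itertools.permutations(range(n)):
        if any(template[i][0] != scene[perm[i]][0] for i in range(n)):
            continue  # node colors must agree
        ds = normalized_distances([scene[j] for j in perm])
        if all(abs(dt[i][j] - ds[i][j]) <= tol
               for i in range(n) for j in range(n)):
            return perm
    return None

# A template and the same pattern rotated 90 degrees and scaled by 2:
template = [("red", (0, 0)), ("blue", (2, 0)), ("red", (0, 1))]
scene = [("blue", (0, 4)), ("red", (-2, 0)), ("red", (0, 0))]
print(match(template, scene))  # a valid assignment despite rotation and scale
```

Note that the normalized-distance signature deliberately discards absolute position and orientation; those are recovered in a separate localization step once the correspondence is known.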

Structural features

In this version of our matcher we cooperate with the Graz University of Technology. The present focus is on invariant matching (rotation, scale, distortion) of complex patterns in printed-circuit-board production. We are currently preparing a commercial release of a DLL package with ambitious performance parameters.

a) The user selects a training region in the image.




b) The matching software finds the pattern despite rotation and changes in scale, illumination, and distortions.



Performance data of the matcher

The pattern to be matched consists of building blocks (feature points). These building blocks can be line fragments, vertices, circles, ellipses, or blobs (as in the application described here). The core step of the matcher takes these feature elements as input and finds a trained pattern in an image despite geometric variation due to:

    • Additions/Deletions
    • Shape changes of the feature points
    • Distortion (i.e., relative motion of feature points)
    • Rotation
    • Scale change

Specifically, the performance data we have achieved are the following:

    • Invariance to translation: localization accuracy 1/10 pixel
    • Invariance to rotation: 360 degrees
    • Invariance to scale changes: 0.5 to 4.0
    • Invariance to distortion: 10 pixels
    • Invariance to deletion/addition (i.e., occlusion): up to 50%
    • Invariance to illumination, gain, and offset changes
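Once feature-point correspondences are established, the pose (translation, rotation, scale) that the figures above refer to can be recovered by a least-squares fit. The sketch below is a generic closed-form 2D similarity fit using complex arithmetic, shown for illustration; it is not the product's localization code, and sub-pixel accuracy in practice additionally depends on how precisely the feature centroids themselves are measured.

```python
import cmath
import math

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (rotation + scale +
    translation) from matched point pairs. Points are treated as complex
    numbers z; the model is q = a*p + t with complex a = s*e^(i*theta).
    Returns (scale, angle_degrees, (tx, ty))."""
    p = [complex(x, y) for x, y in src]
    q = [complex(x, y) for x, y in dst]
    mp = sum(p) / len(p)
    mq = sum(q) / len(q)
    pc = [z - mp for z in p]  # centered source points
    qc = [z - mq for z in q]  # centered destination points
    a = (sum(w.conjugate() * z for w, z in zip(pc, qc))
         / sum(abs(w) ** 2 for w in pc))
    t = mq - a * mp
    return abs(a), math.degrees(cmath.phase(a)), (t.real, t.imag)

# Points rotated 90 degrees, scaled by 2, and shifted by (3, 4):
print(fit_similarity([(0, 0), (1, 0), (0, 1)], [(3, 4), (3, 6), (1, 4)]))
# → (2.0, 90.0, (3.0, 4.0))
```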

Real-time processing

    • t < 3 ms for a search window of 768 x 512 pixels and a template size of 128 x 128 pixels
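Timing figures of this kind are typically reported as the best of several repeated runs, so that cache warm-up and scheduler noise do not inflate the number. A generic harness along those lines (not the vendor's benchmark; `matcher`, `image`, and `template` are placeholders) might look like:

```python
import time

def benchmark(matcher, image, template, runs=100):
    """Call matcher(image, template) `runs` times and return the best
    wall-clock time in milliseconds. Best-of-N excludes one-off costs
    such as cold caches, giving the steady-state per-frame latency."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        matcher(image, template)
        best = min(best, time.perf_counter() - t0)
    return best * 1000.0  # milliseconds

# Example with a dummy matcher standing in for the real search:
ms = benchmark(lambda img, tpl: sum(range(1000)), None, None, runs=10)
print(f"best of 10 runs: {ms:.3f} ms")
```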