Semi-supervised method for training multiple pattern recognition and registration tool models

Information

  • Patent Grant
  • 9679224
  • Patent Number
    9,679,224
  • Date Filed
    Wednesday, July 31, 2013
  • Date Issued
    Tuesday, June 13, 2017
Abstract
A system and method for training multiple pattern recognition and registration models commences with a first pattern model. The model is trained from multiple images. Composite models can be used to improve robustness or model small differences in appearance of a target region. Composite models combine data from noisy training images showing instances of underlying patterns to build a single model. A pattern recognition and registration model is generated that spans the entire range of appearances of the target pattern in the set of training images. The set of pattern models can be implemented as either separate instances of pattern finding models or as a pattern multi-model. The underlying models can be standard pattern finding models or pattern finding composite models, or a combination of both.
Description
FIELD OF THE INVENTION

The present invention relates to machine vision, in which images of objects are obtained using a camera or other imaging device and in which locating a target pattern in the image corresponds to locating the pattern on the object being imaged.


BACKGROUND OF THE INVENTION

A challenge in machine vision systems is to make them user friendly and accessible to a broader range of potential users. There are certain aspects that users understand clearly, for example, how to generate a set of training images and what the ground truth of the situation is. Beyond that, however, many aspects of training and run-time operation of machine vision systems are more difficult to apply.


In machine vision, images of objects are obtained using a camera or other imaging device, and a pattern on the object being imaged is located using a method that executes on a computer or other computing device. Given a set of images, each of which contains at least one instance of a target pattern, but where the target pattern may vary in appearance, it can also be a challenge to identify and train a minimum set of pattern recognition and registration models that are applicable for all images in the image set. The pattern recognition and registration procedure is described in greater detail in U.S. Pat. Nos. 6,408,109; 6,658,145; and 7,016,539, the disclosures of which are incorporated by reference as useful background information. If a pattern is recognized, the pattern recognition and registration procedure (or “tool”) confirms that the viewed pattern is, in fact, the pattern for which the tool is searching and fixes its position, orientation, scale, skew and aspect. An example of such a search tool is the PatMax® product available from Cognex Corporation of Natick, Mass., USA. The pattern recognition and registration procedure is a method of geometric pattern finding, and the methods described herein apply generally to geometric pattern finding.


For example, a pattern might consist of elements containing circles and lines. Referring to FIG. 1, pattern 110 includes a circle 112 and two intersecting lines 114, 116; pattern 120 includes a circle 122 and a pair of lines 124, 126; and pattern 130 includes a circle 132 and a pair of lines 134, 136. Across the set of training images, the circles may vary in radius and the lines may vary in thickness or number. This may be particularly so in the field of semiconductors or other materials in which a plurality of layers are deposited on a substrate, which can lead to distortion of features on each of the layers. The polarity of the patterns may also change throughout the image set (as shown in the difference between pattern 120 and pattern 130). The images may also contain a high degree of noise.


The problem has at least two components. First, the training image set consists of noisy images, so it is difficult to train a clean model from a single image. Second, the pattern has different appearances in the training set, which makes training a single model both difficult and prone to error at runtime.


SUMMARY OF THE INVENTION

To overcome the disadvantages of the prior art, the systems and methods herein use a pattern recognition and registration model to perform training. Illustratively, a pattern finding model is a single model trained from multiple training images. In some embodiments, composite models can be used to improve robustness over standard pattern recognition and registration models, to model small differences in appearance of a target region, or both. To improve robustness, composite models combine data from noisy (or otherwise distorted) training images showing instances of a single underlying pattern to build a single robust model. To achieve this, a training element using the pattern recognition and registration model uses the input images and a known relative position, or pose (which can be human-identified or computer-determined).


To account for small differences in appearance of a target region, a training method is employed to train a set of pattern recognition and registration models that span the entire range (or at least a large portion of the entire range) of appearances of the target pattern in the training set. The set of pattern recognition and registration models can manifest as either separate instances of pattern models or as a pattern multi-model. A pattern multi-model is a collection of pattern recognition and registration models. The underlying models can be standard pattern recognition and registration models or composite pattern models, or a combination of the two. The pattern multi-model is intended for use in modeling targets whose appearance varies significantly. The multi-model can be run in various modes to take advantage of prior knowledge of the likely temporal sequence of model appearances. The incorporation of multiple pattern models within the pattern multi-model framework can be used to reduce the amount of front-end processing, thus allowing for incremental performance gains over running separate pattern model instances. The pattern multi-model can also examine results from its component models to filter for overlap; for example, if results from two models overlap by more than a user-specified threshold, then the pattern multi-model may return only the better-matching (or higher-scoring) result to the user.
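By way of a non-limiting illustration, the following sketch shows one way such an overlap filter could be implemented. The result fields, the region_overlap helper and the threshold handling are assumptions made for illustration only and do not describe any particular commercial tool.

    # Illustrative sketch only: keep the higher-scoring of any two multi-model
    # results whose regions overlap by more than a user-specified threshold.
    # Regions are assumed to be axis-aligned boxes (x0, y0, x1, y1).

    def region_overlap(a, b):
        """Fraction of the smaller region covered by the intersection of a and b."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        smaller = min(area_a, area_b)
        return inter / smaller if smaller > 0 else 0.0

    def filter_overlapping_results(results, overlap_threshold=0.5):
        """results: list of dicts with 'score' and 'region' keys."""
        kept = []
        for result in sorted(results, key=lambda r: r["score"], reverse=True):
            if all(region_overlap(result["region"], k["region"]) <= overlap_threshold
                   for k in kept):
                kept.append(result)
        return kept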





BRIEF DESCRIPTION OF THE DRAWINGS

The invention description below refers to the accompanying drawings, of which:



FIG. 1, already described, shows three exemplary images each including a pattern, according to a pattern recognition and registration procedure;



FIG. 2 is a schematic block diagram of an exemplary machine vision system for practicing the principles of the present invention in accordance with an illustrative embodiment;



FIG. 3 is a flow chart of a procedure for training a single pattern recognition and registration model, in accordance with the illustrative embodiments;



FIG. 4 is a flow chart of a procedure for training a pattern multi-model and measuring performance of a currently trained output model, in accordance with the illustrative embodiments;



FIG. 5 is a flow chart of a procedure for proposing and ranking candidates for addition of the output model collection, in accordance with the illustrative embodiments; and



FIG. 6 is a flow chart of a procedure for proposing the highest scoring candidate to the user and outputting pattern multi-models, in accordance with the illustrative embodiments.





DETAILED DESCRIPTION


FIG. 2 is a schematic block diagram of a machine vision system 200 that may be utilized to practice the principles of the present invention in accordance with an illustrative embodiment. The machine vision system 200 includes a capturing device 205 that generates an image of an object 210 having one or more features 215. The capturing device 205 can comprise a conventional video camera or scanner. Such a video camera can be a charge coupled device (CCD) or other system for obtaining appropriate image information, such as the well-known CMOS sensors. Image data (or pixels) generated by the capturing device 205 represents an image intensity, for example, color or brightness of each point in the scene within the resolution of the capturing device 205. The capturing device 205 transmits digital image data via a communications path 220 to an image analysis system 225. The image analysis system 225 can comprise a conventional digital data processor, such as the vision processing systems of the type commercially available from, for example, Cognex Corporation. The image analysis system 225 can comprise a conventional microcomputer or other exemplary computing device. Other forms of interfaces can be utilized, including, e.g., personal digital assistants (PDAs), etc. In alternative embodiments, the capturing device can include processing capabilities to perform the functions of the image analysis system. In such embodiments, there is no need for a separate image analysis system. In further alternative embodiments, a capturing device can be operatively interconnected with an image analysis system for training purposes. Once training has occurred, an appropriate model or models can be stored in the capturing device for use during run time.


The image analysis system 225 can be programmed in accordance with the teachings of the present invention to find similar features among a plurality of images to generate appropriate recognition and registration information for training a machine vision system. The image analysis system 225 can have one or more central processing units (processors) 230, main memory 235, input/output systems 245 and one or more disk drives or other form of mass storage 240. Illustratively, the input/output system 245 interconnects with the communications path 220 between the capturing device 205 and the image analysis system 225. The system 225 can be configured by programming instructions in accordance with the teachings of the present invention to perform the novel multi-image trained pattern recognition and registration of the present invention. As will be appreciated by those skilled in the art, alternative hardware and/or software configurations can be utilized to implement the principles of the present invention. Specifically, the teachings of the present invention can be implemented in software, hardware, firmware and/or any combination thereof. Furthermore, during run-time, as opposed to training time, additional components can be included in the machine vision system 200. For example, objects 215 can be transported by a conveyor belt or other assembly line apparatus, etc.


In accordance with an illustrative embodiment of the present invention, the machine vision system 200 can be utilized to generate the training model for a run-time machine vision system. Thus, the machine vision system 200 can be utilized to generate a training model that can be utilized in a plurality of machine vision systems utilizing similar components.


Moreover, it should be noted that the pattern element (or pattern recognition and registration element) as shown and described herein, and their associated models, generally reside within the image analysis system 225. However, the placement and storage of the elements and models are highly variable within ordinary skill.


It should be noted that while the present invention is described in terms of a machine vision system 200, the principles of the present invention can be utilized in a variety of differing embodiments. As such, the term machine vision system should be taken to include alternative systems. More generally, the principles of the present invention can be implemented on any system that registers subpatterns in images. For example, one embodiment can involve a conventional machine vision system comprising a standalone camera operatively interconnected with a standalone computer programmed to process images, etc. However, the principles of the present invention can be utilized in other devices and/or systems that register subpatterns in images, for example, a vision sensor, such as the Checker product available from Cognex Corporation, or another device that comprises illumination sources, image acquisition capabilities and/or processing capabilities. Such vision sensors can be trained and/or configured via separate modules, such as a Cognex Vision View. In such embodiments, the user can train the vision sensor using a plurality of parts, instead of a single part. The user can select a first part, place it in front of the sensor and indicate to the system that the training part is positioned. A second (third, etc.) part can be similarly trained. The user can control the training step using, e.g., a graphical user interface (GUI) and/or buttons or other control surfaces located on either the training module and/or the vision sensor itself. Furthermore, the functionality of the present invention can be incorporated into handheld devices, wireless compatible devices, etc. As such, the term machine vision system should be interpreted broadly to encompass all such systems and devices that can utilize one or more of the teachings of the present invention.


Training a Single Pattern Recognition and Registration Model

In accordance with the illustrative embodiments, a pattern recognition and registration model is trained from multiple images. Refer, for example, to U.S. Pat. No. 8,315,457, the disclosure of which is incorporated by reference as useful background information, for a more detailed description of training a single pattern recognition and registration model. Composite models can be used to either improve robustness over standard pattern models, or to model small differences in appearance of a target region. A training element implemented herein trains a set of pattern recognition and registration models that span the entire range of appearances of a target pattern in a set of training images. The set of models can be a single pattern recognition and registration model, or a collection of models termed herein a pattern “multi-model” element. The multi-model element is intended for use in modeling targets whose appearance varies significantly. The multi-model can be run in various modes to take advantage of prior knowledge of the likely temporal sequence of model appearances.


As used herein, the term “training element” (or training module) refers to the non-transitory embodiment of the steps carried out in generating a training model. The training element is part of a non-transitory computer program that contains one (or several) routines or functions dedicated to performing a particular task. Each element (or module) as shown and described herein can be used alone or combined with other modules within the machine vision system. The training element creates the training model by training a set of models that span the entire range of training images contained in the database. Additionally, as used herein, the term “pattern recognition and registration model” or “pattern model” refers generally to the pattern models disclosed in the '457 patent, unless otherwise noted.


Reference is now made to FIG. 3 showing a flow chart of a procedure 300 performed by a training element for training a single pattern recognition and registration model, in accordance with the illustrative embodiments. At step 310, the initial input to the algorithm (which can be user-provided or computer-provided) is an initial training image and a region specifying the pattern to be trained (a “region of interest”), which can also be user-provided or computer-provided. The procedure 300 takes this input and at step 320 trains a first (initial) pattern recognition and registration (“PatMax”) model (P0) using training parameters at 325. Next, at step 330, the system iterates over the image set (at least a portion, or subset, of the remaining training images) running the pattern model P0, with the image set being provided by the user or by a computer, having previously been stored in a database. The system can iterate the model over the entire remaining training image set or a portion of the remaining image set, and stores the result scores, poses and matching region data. At step 340 the results are sorted in order of score (and, if ground truth data is available, in order of accuracy). The ground truth can be user-supplied or computer-generated. At step 350, the procedure inputs the top (NC−1) images (where NC is a parameter specifying the number of images to be input to the composite model training) and at step 360 trains a composite model using the pose and region information from the results previously generated in the running of P0.
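By way of a non-limiting illustration, the following sketch outlines procedure 300. Because the underlying pattern tool is proprietary, the operations of training a model, running it on an image and training a composite model are shown as injected callables; every name and parameter below is an assumption made for illustration only.

    # Illustrative sketch of procedure 300 (FIG. 3): train an initial model P0,
    # run it over the training set, sort the results by score and train a
    # composite model from the top (NC - 1) images. The train_model, run_model
    # and train_composite callables are assumed placeholders, not Cognex APIs.

    def train_single_pattern_model(initial_image, region, training_images,
                                   train_model, run_model, train_composite, nc=5):
        # Step 320: train the initial pattern model P0 from the provided
        # training image and region of interest.
        p0 = train_model(initial_image, region)

        # Step 330: run P0 over the remaining training images, storing score,
        # pose and matching-region data wherever the pattern is found.
        results = []
        for image in training_images:
            found = run_model(p0, image)  # assumed to return None or (score, pose, region)
            if found is not None:
                score, pose, match_region = found
                results.append((score, pose, match_region, image))

        # Step 340: sort the results by score (or by accuracy, if ground truth
        # data is available).
        results.sort(key=lambda r: r[0], reverse=True)

        # Steps 350-360: train the composite model from the initial image plus
        # the top (NC - 1) images, using their pose and region information.
        top = results[:nc - 1]
        samples = [(initial_image, region)] + [(img, reg) for _, _, reg, img in top]
        return train_composite(samples)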


As described in greater detail in U.S. Pat. No. 8,315,457, incorporated herein by reference as useful background information, multi-image training is performed for pattern recognition and registration. A machine vision system obtains a plurality of (“N”) training images. One image is selected and the other (N−1) images are then substantially registered to the selected image. The selection and registration is iterated so that each of the N images is utilized as the baseline image. By iterating for each of the N images as a baseline image, the procedure builds up a database of corresponded features that can be utilized in building a model of features that are stable among the images. Then features that represent a set of corresponding image features are added to the model. To build the database of corresponded features, each of the features can be corresponded using a boundary inspection tool or other conventional techniques to correspond contours in machine vision systems. Illustratively, those features selected for the model are those that minimize the maximum distance among the corresponding features in each of the images in which the feature appears. The feature to be added to the model can comprise an average of the features from each of the images in which the feature appears. The process continues until every feature that meets a threshold requirement is accounted for. The model that results from this process represents those stable features that are found in at least the threshold number of the N training images. This process (described in the '457 patent) identifies those features that are sufficiently supported by the evidence of the training images that they are stable features. The model can then be used to train an alignment, search or inspection tool with the set of features.
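Illustratively, and without limiting the '457 patent, the following sketch captures the underlying idea of keeping only features that are stable across the training images. Feature extraction and correspondence are shown as injected callables, features are assumed to be simple (x, y) points, and all names are assumptions for illustration only.

    # Illustrative sketch of multi-image training: correspond features across
    # the N images (using each image as a baseline) and keep the average of any
    # feature that appears in at least a threshold fraction of the images.

    def build_stable_feature_model(images, extract_features, correspond,
                                   support_fraction=0.85):
        n = len(images)
        feature_sets = [extract_features(img) for img in images]

        model = []
        for baseline_idx, baseline_feats in enumerate(feature_sets):
            for feat in baseline_feats:
                matches = [feat]
                for other_idx, other_feats in enumerate(feature_sets):
                    if other_idx == baseline_idx:
                        continue
                    m = correspond(feat, other_feats)  # None if no match in that image
                    if m is not None:
                        matches.append(m)
                # Keep the feature only if it is supported by at least the
                # threshold number of the N training images; add the average of
                # its corresponded positions to the model.
                if len(matches) >= support_fraction * n:
                    avg_x = sum(f[0] for f in matches) / len(matches)
                    avg_y = sum(f[1] for f in matches) / len(matches)
                    model.append((avg_x, avg_y))
        return model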


Referring back to FIG. 3, the user can supply additional composite model training parameters 355 that specify what fraction of the NC training images must contain a particular feature for it to be included in the output model. Illustratively, the fraction can be a percentage, such as 80% to 90%, but is highly variable within ordinary skill, depending upon the particular application. The user can also specify a proximity threshold for features from different training images to be considered a match.


Training a Pattern Recognition and Registration Multi-Model

Reference is now made to FIG. 4 showing a flow chart of a procedure 400 for training a pattern recognition and registration multi-model, and measuring performance of a currently trained output model, in accordance with the illustrative embodiments. At step 410, the initial inputs to the procedure (generally from a user, but can also be computer-provided) are: a training image (I0), a region R0 specifying the extent of a pattern within the image I0, the origin of the pattern (O0) within the training image I0 and a set of training images {I1, I2, . . . , IN} showing the range of appearances of the pattern of interest.


The procedure uses these inputs at step 420 to train a first “PatMax” pattern composite model (PCMOUT0) using the above-described procedure for training a single pattern recognition and registration model, shown in FIG. 3, according to composite model parameters 422. The training parameters 424 (TPOUT) used in training the output model are restrictive enough to ensure that the trained model will not produce high-scoring false finds in a search over the full set of training images. If using the pattern recognition and registration multi-model framework, then PCMOUT0 will be added to the output multi-model, PMMOUT. If not using the multi-model framework, then it will be stored as the first pattern recognition and registration model of the output set (this is also called PMMOUT0 for descriptive purposes).


Next, at step 430, the procedure uses the same inputs (from 410) to train a different (second) pattern recognition and registration model PCMCAND0 using the previously described algorithm shown in FIG. 3 for a single pattern recognition and registration model. The pattern training parameters TPCAND 434 used in this process are those for training a model used exclusively for finding candidates for training further output composite models. These training parameters 434 should be more relaxed than those used to produce the output models. The governing premise is that PCMCAND0 is able to propose a more diverse range of training candidates than would be possible using the more restrictively trained PCMOUT0, but any false finds can be rejected by the user, or automatically based on known ground truths. As for the output model, PCMCAND0 can either be added to a pattern recognition multi-model PMMCAND or added to or stored in some other type of model collection.
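By way of a non-limiting illustration, the contrast between the restrictive output parameters TPOUT and the relaxed candidate parameters TPCAND might be expressed as follows. The parameter names and values are assumptions for illustration only and are not taken from the patent or from any commercial tool.

    # Illustrative sketch only: restrictive parameters for output models,
    # relaxed parameters for candidate-finder models.

    TP_OUT = {
        "accept_threshold": 0.80,   # high score required, to avoid false finds
        "angle_range_deg": 5.0,     # narrow allowed rotation
        "scale_range": 0.05,        # narrow allowed scale variation
    }

    TP_CAND = {
        "accept_threshold": 0.50,   # lower score accepted, to propose more candidates
        "angle_range_deg": 15.0,    # wider allowed rotation
        "scale_range": 0.15,        # wider allowed scale variation
    }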


Performance Measurement

At step 440, prior to commencing the process of finding pattern candidates and training those that are considered the “best” (highest score or match), the system must first measure the performance of the currently trained output model, i.e. PMMOUT. To measure the performance, the procedure runs the model over the entire test set of images and calculates a combined score, which is initialized to 0. If PMMOUT finds the pattern in an image with a score (the score range is between 0 and 1) greater than a user-defined confidence threshold, then that score is added to the combined score. However, if PMMOUT fails to find the pattern in an image with a score greater than the user-defined confidence threshold, then 1 is subtracted from the combined score. Other similar scoring functions can be implemented by those having ordinary skill and may incorporate a measure of alignment accuracy if ground truth data is available.
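By way of a non-limiting illustration, the combined-score computation described above can be sketched as follows; the run_multi_model callable and the default threshold value are assumptions for illustration only.

    # Illustrative sketch of the performance measurement: accumulate the score
    # of each confident find and subtract 1 for each image where the pattern is
    # not found above the confidence threshold. Scores lie between 0 and 1.

    def measure_performance(output_multi_model, images, run_multi_model,
                            confidence_threshold=0.7):
        combined_score = 0.0
        for image in images:
            score = run_multi_model(output_multi_model, image)  # None if not found
            if score is not None and score > confidence_threshold:
                combined_score += score
            else:
                combined_score -= 1.0
        return combined_score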


After performance measurement, the remaining steps of the procedure can be repeated iteratively, and are thus denoted with the variable ‘t’. Reference is now made to FIGS. 5 and 6, showing flow charts of procedures for proposing candidate models in accordance with the illustrative embodiments. With reference to FIG. 5, the procedure 500 is for proposing and ranking candidates for addition to the output model collection, PMMOUT(t). At 510, the inputs to iteration (t) include candidate and output multi-models, PMMCAND(t), PMMOUT(t) where

    • PMMCAND(t) contains {PCMCAND(0), PCMCAND(1), . . . , PCMCAND(t)} and
    • PMMOUT(t) contains {PCMOUT(0), PCMOUT(1), . . . , PCMOUT(t)}


At step 520 of the procedure, the candidate multi-model PMMCAND proposes and ranks candidates for addition to the output model collection PMMOUT(t). To accomplish this, the candidate pattern multi-model, PMMCAND(t), is run on each training image Ii. If an acceptable result is returned (i.e., a location is found where the model scores higher than a user-defined accept threshold), then at step 520 the matching region Ri and origin Oi are used to train a candidate pattern composite model PCMOUT(i) (as described hereinabove regarding training a single model for PMMOUT(t)). The candidate composite model is therefore trained from the candidate region Ri of the image Ii and the corresponding regions of the best matching NC−1 images of that candidate image region (Ri of the image Ii).


At step 530 the procedure iterates through the set of candidate pattern composite models and, for each, first adds it to the output collection PMMOUT(t)→PMMOUT(t)′, then measures its performance in an identical way to that described hereinabove under Performance Measurement. After obtaining the score for the proposed expansion of the output multi-model, PMMOUT(t)′, PCMOUT(i) is removed from it: PMMOUT(t)′→PMMOUT(t). At step 534, the candidates (i.e. the PCMOUT(i)) are sorted according to these scores.


At the end of the procedure 500, the system has a collection of candidate pattern composite models at step 540 that cover all training images where PMMCAND(t) could find an acceptable result. The procedure ranks these models according to how much improvement in coverage each provides to the output pattern model collection (or multi-model) PMMOUT(t). If no candidates are found to improve the score by more than a user-defined amount, then a stopping criterion can be deemed to have been met.
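By way of a non-limiting illustration, procedure 500 can be sketched as follows. The run_multi_model and train_candidate_composite callables, the measure callable (assumed to implement the combined-score measurement described above), the accept threshold and the candidate tuple layout are all assumptions for illustration only.

    # Illustrative sketch of procedure 500: propose candidate composite models
    # from images where the candidate multi-model finds the pattern, score each
    # candidate by temporarily adding it to the output collection, and rank the
    # candidates by the resulting combined score.

    def propose_and_rank_candidates(pmm_cand, pmm_out, images,
                                    run_multi_model, train_candidate_composite,
                                    measure, accept_threshold=0.6):
        candidates = []
        for image in images:
            found = run_multi_model(pmm_cand, image)  # None or (score, origin, region)
            if found is None or found[0] <= accept_threshold:
                continue
            _, origin, region = found

            # Train PCM_OUT(i) from this region and its best-matching NC-1
            # images (procedure 300).
            pcm_out_i = train_candidate_composite(image, region, origin)

            # Temporarily add the candidate, measure the combined score over
            # all images, then remove it again.
            pmm_out.append(pcm_out_i)
            score_with_candidate = measure(pmm_out, images)
            pmm_out.pop()

            candidates.append((score_with_candidate, pcm_out_i, image, region, origin))

        # Rank by how much each candidate improves coverage of PMM_OUT(t).
        candidates.sort(key=lambda c: c[0], reverse=True)
        return candidates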


Reference is now made to FIG. 6 showing a procedure 600 for proposing candidate models and outputting pattern multi-models, according to the illustrative embodiments. At step 620, the procedure proposes the highest-scoring candidate to the user (for example, by displaying the region of interest of the candidate within the candidate image Ii). At step 622, the user can accept or reject the candidate; equivalently, the computer may be presented with the highest-scoring candidate and accept or reject it based on a known ground truth. If the candidate is accepted, then at step 630 the user can be given the opportunity to adjust the origin of the new model, in case of slight alignment errors in the output of PMMCAND(t). If the candidate is rejected at step 624, the top candidate PCMOUT(top) is discarded and the system proposes the next candidate in the ordered list.


If the candidate is accepted, then at step 640 the accepted candidate model PCMOUT(accepted) is added to the current output model collection PMMOUT(t)→PMMOUT(t+1). The candidate finder model collection (or multi-model) should now desirably be updated with a similar model. At step 650, the candidate model PCMCAND(accepted) is trained from the region Raccepted of the image Iaccepted, using training parameters TPCAND. PCMCAND(accepted) is then added to PMMCAND(t)→PMMCAND(t+1) at step 660. The outputs of iteration (t) at step 670 are the candidate multi-model PMMCAND(t+1) and the output multi-model PMMOUT(t+1).
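By way of a non-limiting illustration, the accept-and-update portion of procedure 600 can be sketched as follows; the callables, the candidate tuple layout (matching the earlier sketch) and the single-acceptance-per-iteration behavior are assumptions for illustration only.

    # Illustrative sketch of procedure 600: walk the ranked candidates, let the
    # user (or a ground-truth check) accept or reject each one, and update both
    # the output and candidate-finder multi-models when one is accepted.

    def accept_candidate(ranked_candidates, pmm_out, pmm_cand,
                         user_accepts, adjust_origin, train_candidate_model):
        for _, pcm_out_i, image, region, origin in ranked_candidates:
            # Steps 620-622: propose the highest-scoring remaining candidate.
            if not user_accepts(image, region):
                continue                      # Step 624: discard, try the next one.

            # Step 630: allow the user to correct slight alignment errors.
            origin = adjust_origin(image, origin)

            # Step 640: add the accepted model to the output collection.
            pmm_out.append(pcm_out_i)

            # Steps 650-660: train and add a similar candidate-finder model
            # using the relaxed training parameters TP_CAND.
            pmm_cand.append(train_candidate_model(image, region, origin))
            break                             # One acceptance per iteration t.

        # Step 670: the updated collections are the outputs of iteration t.
        return pmm_out, pmm_cand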


The various illustrative embodiments provide for generation of a pattern recognition and registration model that is iterated over each training image of a plurality of training images to provide a model that spans (i.e. is valid over) the entire database of training images. This improves robustness and efficiency of the run-time system.


The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above can be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, as used herein the terms “process” and/or “processor” should be taken broadly to include a variety of electronic hardware and/or software based functions and components. Also, as used herein various directional and orientational terms such as “vertical”, “horizontal”, “up”, “down”, “bottom”, “top”, “side”, “front”, “rear”, “left”, “right”, and the like, are used only as relative conventions and not as absolute orientations with respect to a fixed coordinate system, such as gravity. Moreover, a depicted process or processor can be combined with other processes and/or processors or divided into various sub-processes or processors. Such sub-processes and/or sub-processors can be variously combined according to embodiments herein. Likewise, it is expressly contemplated that any function, process and/or processor herein can be implemented using electronic hardware, software consisting of a non-transitory computer-readable medium of program instructions, or a combination of hardware and software. Moreover, it is contemplated that some or all vision system processing tasks can be carried out either in the main module or in a remote processor (e.g. a server or PC) that is operatively connected through the interface module to the main module via a wired or wireless communication (network) link. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.

Claims
  • 1. A method for training a pattern recognition and registration model in a machine vision system, the method comprising the steps of: providing one or more initial training images having a region specifying a pattern to be trained, the one or more training images being provided from a database containing a plurality of training images; training a first pattern model from the one or more initial training images; iterating over remaining images from the one or more initial training images and selecting a subset of high scoring images from the remaining images as input to model training; and outputting a trained pattern model that includes features common to a predetermined number of the plurality of training images, the trained pattern model being different from the first pattern model.
  • 2. The method as set forth in claim 1 wherein the step of iterating includes running the first pattern model to score each image.
  • 3. The method as set forth in claim 1 wherein the first pattern model is trained using a first set of training parameters and a second pattern model is trained using a second set of training parameters.
  • 4. The method as set forth in claim 1 wherein a metric used to score the remaining images comprises calculating a combined score which is initialized to zero, and if the pattern is found, using the first pattern model, in an image with a score greater than a user-defined confidence threshold, then that score is added to the combined score, and if the pattern is not found, using a first candidate pattern model, in an image with a score greater than the user-defined confidence threshold, then 1 is subtracted from the combined score.
  • 5. The method as set forth in claim 1 wherein each feature in the trained output pattern occurs in approximately 80%-90% of the training images.
  • 6. The method as set forth in claim 1 wherein the region specifying the pattern to be trained is given for each image by a predetermined ground truth.
  • 7. The method set forth in claim 6 but where the predetermined ground truth is found for each image by running the first pattern model.
  • 8. The method as set forth in claim 1 further comprising the step of training a second candidate pattern model having a second set of pattern training parameters, and iterating the second candidate pattern model over the remaining training images contained in the database and storing scores, poses and matching region data for the second candidate pattern model.
  • 9. The method as set forth in claim 1 wherein the step of training the first pattern model further comprises storing scores, poses and matching region data.
  • 10. The method as set forth in claim 1 wherein a first candidate pattern model comprises a composite model.
  • 11. The method as set forth in claim 1 wherein the one or more training images that are provided from the database are selected by a computer.
  • 12. The method as set forth in claim 1 wherein the trained pattern model is used in order to perform an alignment, a search or vision inspection tool in runtime operation of the machine vision system.
  • 13. The method as set forth in claim 1 wherein a pattern origin is specified as input to training the first pattern model, in addition to the training image and region.
  • 14. A system for generating pattern recognition and registration models, the system comprising: a memory having computer executable instructions stored therein, the memory further comprising a database containing a plurality of training images, at least one image having a region specifying a pattern to be trained; one or more processors that when executing the instructions are configured to: train an initial pattern recognition and registration model by iterating the initial pattern recognition and registration model over the plurality of training images, and stores scores, poses and matching region data to provide a trained model; and measure performance of the trained model over the plurality of training images.
  • 15. A system for training a pattern recognition and registration model in a machine vision system, comprising: a memory having computer executable instructions stored therein; and one or more processors that when executing the instructions are configured to: provide one or more initial training images having a region specifying a pattern to be trained, the one or more training images being provided from a database containing a plurality of training images; train a first pattern model from the one or more initial training images; iterate over remaining images from the one or more initial training images and selecting a subset of high scoring images from the remaining images as input to model training; and output a trained pattern model that includes features common to a predetermined number of the plurality of training images, the trained pattern model being different from the first pattern model.
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application Ser. No. 61/841,142, filed Jun. 28, 2013, entitled A SEMI-SUPERVISED METHOD FOR TRAINING MULTIPLE PATMAX MODELS, the entire disclosure of which is herein incorporated by reference.

US Referenced Citations (332)
Number Name Date Kind
3069654 Hough Dec 1962 A
3560930 Howard Feb 1971 A
3816722 Sakoe et al. Jun 1974 A
3898617 Kashioka et al. Aug 1975 A
3899240 Gabor Aug 1975 A
3899771 Saraga et al. Aug 1975 A
3936800 Ejiri et al. Feb 1976 A
3986007 Ruoff Oct 1976 A
4115702 Nopper Sep 1978 A
4115762 Akiyama et al. Sep 1978 A
4146924 Birk et al. Mar 1979 A
4183013 Agrawala et al. Jan 1980 A
4200861 Hubach et al. Apr 1980 A
4213150 Robinson et al. Jul 1980 A
4295198 Copeland et al. Oct 1981 A
4441205 Berkin et al. Apr 1984 A
4441206 Kuniyoshi et al. Apr 1984 A
4441248 Sherman et al. Apr 1984 A
4567610 McConnell Jan 1986 A
4570180 Baier et al. Feb 1986 A
4581762 Lapidus et al. Apr 1986 A
4618989 Tsukune et al. Oct 1986 A
4637055 Taylor Jan 1987 A
4651341 Nakashima et al. Mar 1987 A
4672676 Linger Jun 1987 A
4685143 Choate Aug 1987 A
4688088 Hamazaki et al. Aug 1987 A
4707647 Coldren et al. Nov 1987 A
4736437 Sacks et al. Apr 1988 A
4746251 Yoshikawa et al. May 1988 A
4763280 Robinson et al. Aug 1988 A
4783826 Koso Nov 1988 A
4783828 Sadjadi Nov 1988 A
4783829 Miyakawa et al. Nov 1988 A
4799175 Sano et al. Jan 1989 A
4809348 Meyer et al. Feb 1989 A
4823394 Berkin et al. Apr 1989 A
4843631 Steinpichler et al. Jun 1989 A
4845765 Juvin et al. Jul 1989 A
4849914 Medioni et al. Jul 1989 A
4860374 Murakami et al. Aug 1989 A
4860375 McCubbrey et al. Aug 1989 A
4876457 Bose Oct 1989 A
4876728 Roth Oct 1989 A
4881249 Chautemps et al. Nov 1989 A
4893346 Bishop Jan 1990 A
4903313 Tachikawa Feb 1990 A
4922543 Ahlbom et al. May 1990 A
4955062 Terui Sep 1990 A
4959898 Landman et al. Oct 1990 A
4962541 Doi et al. Oct 1990 A
4972359 Silver et al. Nov 1990 A
4979223 Manns et al. Dec 1990 A
4980971 Bartschat et al. Jan 1991 A
4982438 Usami et al. Jan 1991 A
5003166 Girod Mar 1991 A
5020006 Sporon-Fiedler May 1991 A
5027417 Kitakado et al. Jun 1991 A
5033099 Yamada et al. Jul 1991 A
5040231 Terzian Aug 1991 A
5046109 Fujimori et al. Sep 1991 A
5048094 Aoyama et al. Sep 1991 A
5060276 Morris et al. Oct 1991 A
5072384 Doi et al. Dec 1991 A
5086478 Kelly-Mahaffey et al. Feb 1992 A
5111516 Nakano et al. May 1992 A
5113565 Cipolla et al. May 1992 A
5161201 Kaga et al. Nov 1992 A
5168530 Peregrim et al. Dec 1992 A
5177559 Batchelder et al. Jan 1993 A
5206917 Ueno et al. Apr 1993 A
5245674 Cass et al. Sep 1993 A
5253306 Nishio Oct 1993 A
5253308 Johnson Oct 1993 A
5265170 Hine et al. Nov 1993 A
5268999 Yokoyama Dec 1993 A
5272657 Basehore Dec 1993 A
5280351 Wilkinson Jan 1994 A
5313532 Harvey et al. May 1994 A
5343028 Figarella et al. Aug 1994 A
5343390 Doi et al. Aug 1994 A
5347595 Bokser Sep 1994 A
5351310 Califano et al. Sep 1994 A
5371690 Engel et al. Dec 1994 A
5384711 Kanai et al. Jan 1995 A
5398292 Aoyama et al. Mar 1995 A
5406642 Maruya Apr 1995 A
5459636 Gee et al. Oct 1995 A
5471403 Fujimaga Nov 1995 A
5471541 Burtnyk et al. Nov 1995 A
5481712 Silver et al. Jan 1996 A
5487117 Burges et al. Jan 1996 A
5495537 Bedrosian et al. Feb 1996 A
5497451 Holmes Mar 1996 A
5500906 Picard et al. Mar 1996 A
5513275 Khalaj et al. Apr 1996 A
5515453 Hennessey et al. May 1996 A
5524064 Oddou et al. Jun 1996 A
5537669 Evans et al. Jul 1996 A
5539841 Huttenlocher et al. Jul 1996 A
5541657 Yamamoto et al. Jul 1996 A
5544254 Hartley et al. Aug 1996 A
5545887 Smith et al. Aug 1996 A
5548326 Michael Aug 1996 A
5550763 Michael Aug 1996 A
5550937 Bell et al. Aug 1996 A
5555317 Anderson Sep 1996 A
5555320 Irie et al. Sep 1996 A
5557684 Wang et al. Sep 1996 A
5559901 Lobregt Sep 1996 A
5568563 Tanaka et al. Oct 1996 A
5570430 Sheehan et al. Oct 1996 A
5586058 Aloni et al. Dec 1996 A
5602937 Bedrosian et al. Feb 1997 A
5602938 Akiyama et al. Feb 1997 A
5613013 Schuette Mar 1997 A
5621807 Eibert et al. Apr 1997 A
5623560 Nakajima et al. Apr 1997 A
5625707 Diep et al. Apr 1997 A
5625715 Trew et al. Apr 1997 A
5627912 Matsumoto May 1997 A
5627915 Rosser May 1997 A
5631975 Riglet et al. May 1997 A
5633951 Moshfeghi May 1997 A
5638116 Shimoura et al. Jun 1997 A
5638489 Tsuboka Jun 1997 A
5640200 Michael Jun 1997 A
5650828 Lee Jul 1997 A
5657403 Wolff et al. Aug 1997 A
5663809 Miyaza et al. Sep 1997 A
5673334 Nichani et al. Sep 1997 A
5676302 Petry Oct 1997 A
5686973 Lee Nov 1997 A
5694482 Maali et al. Dec 1997 A
5694487 Lee Dec 1997 A
5703960 Soest Dec 1997 A
5703964 Menon et al. Dec 1997 A
5708731 Shimotori et al. Jan 1998 A
5717785 Silver Feb 1998 A
5751853 Michael May 1998 A
5754226 Yamada et al. May 1998 A
5757956 Koljonen et al. May 1998 A
5761326 Brady et al. Jun 1998 A
5768421 Gaffin et al. Jun 1998 A
5793901 Matsutake et al. Aug 1998 A
5796868 Dutta-Choudhury et al. Aug 1998 A
5815198 Vachtsevanos et al. Sep 1998 A
5822742 Alkon et al. Oct 1998 A
5825483 Michael et al. Oct 1998 A
5825913 Rostami et al. Oct 1998 A
5825922 Pearson et al. Oct 1998 A
5828769 Burns Oct 1998 A
5828770 Leis et al. Oct 1998 A
5835622 Koljonen et al. Nov 1998 A
5845007 Ohashi et al. Dec 1998 A
5845288 Syeda-Mahmood Dec 1998 A
5848184 Taylor et al. Dec 1998 A
5848189 Pearson et al. Dec 1998 A
5850466 Schott et al. Dec 1998 A
5850469 Martin et al. Dec 1998 A
5859923 Petry et al. Jan 1999 A
5861910 McGarry et al. Jan 1999 A
5862245 Renouard et al. Jan 1999 A
5864779 Fujimoto Jan 1999 A
5871018 Delp et al. Feb 1999 A
5875040 Matraszek et al. Feb 1999 A
5881170 Araki et al. Mar 1999 A
5890808 Neff et al. Apr 1999 A
5912984 Michael et al. Jun 1999 A
5912985 Morimoto et al. Jun 1999 A
5917733 Bangham Jun 1999 A
5926568 Chaney et al. Jul 1999 A
5930391 Kinjo Jul 1999 A
5933516 Tu et al. Aug 1999 A
5933523 Drisko et al. Aug 1999 A
5937084 Crabtree et al. Aug 1999 A
5940535 Huang Aug 1999 A
5943441 Michael Aug 1999 A
5943442 Tanaka Aug 1999 A
5950158 Wang Sep 1999 A
5953130 Benedict et al. Sep 1999 A
5974169 Bachelder Oct 1999 A
5974365 Mitchell Oct 1999 A
5978080 Michael et al. Nov 1999 A
5982475 Bruning et al. Nov 1999 A
5987172 Michael Nov 1999 A
5995648 Drisko et al. Nov 1999 A
5995953 Rindtorff et al. Nov 1999 A
6002793 Silver et al. Dec 1999 A
6005978 Garakani Dec 1999 A
6021220 Anderholm Feb 2000 A
6023530 Wilson Feb 2000 A
6026186 Fan Feb 2000 A
6026359 Yamaguchi et al. Feb 2000 A
6035006 Matui Mar 2000 A
6035066 Michael Mar 2000 A
6052489 Sakaue Apr 2000 A
6061086 Reimer et al. May 2000 A
6064338 Kobayakawa et al. May 2000 A
6064388 Reyzin May 2000 A
6064958 Takahashi et al. May 2000 A
6067379 Silver May 2000 A
6070160 Geary et al. May 2000 A
6078700 Sarachik Jun 2000 A
6081620 Anderholm Jun 2000 A
6111984 Fukasawa Aug 2000 A
6115052 Freeman et al. Sep 2000 A
6118893 Li Sep 2000 A
6122399 Moed Sep 2000 A
6128405 Fuji Oct 2000 A
6137893 Michael et al. Oct 2000 A
6141033 Michael et al. Oct 2000 A
6151406 Chang et al. Nov 2000 A
6154566 Mine et al. Nov 2000 A
6154567 McGarry Nov 2000 A
6173066 Peurach et al. Jan 2001 B1
6173070 Michael et al. Jan 2001 B1
6178261 Williams et al. Jan 2001 B1
6178262 Picard et al. Jan 2001 B1
6188784 Linker et al. Feb 2001 B1
6215915 Reyzin Apr 2001 B1
6226418 Miller et al. May 2001 B1
6226783 Limondin et al. May 2001 B1
6246478 Chapman et al. Jun 2001 B1
6272244 Takahashi et al. Aug 2001 B1
6272245 Lin Aug 2001 B1
6311173 Levin Oct 2001 B1
6324298 O'Dell et al. Nov 2001 B1
6324299 Sarachick et al. Nov 2001 B1
6345106 Borer Feb 2002 B1
6363173 Stenz et al. Mar 2002 B1
6381366 Taycher et al. Apr 2002 B1
6381375 Reyzin Apr 2002 B1
6385340 Wilson May 2002 B1
6396949 Nichani May 2002 B1
6408109 Silver et al. Jun 2002 B1
6421458 Michael et al. Jul 2002 B2
6424734 Roberts et al. Jul 2002 B1
6453069 Matsugu et al. Sep 2002 B1
6457032 Silver Sep 2002 B1
6462751 Felser et al. Oct 2002 B1
6466923 Young et al. Oct 2002 B1
6516092 Bachelder et al. Feb 2003 B1
6529852 Knoll et al. Mar 2003 B2
6532301 Krumm et al. Mar 2003 B1
6594623 Wang et al. Jul 2003 B1
6608647 King Aug 2003 B1
6625303 Young et al. Sep 2003 B1
6636634 Melikian et al. Oct 2003 B2
6639624 Bachelder et al. Oct 2003 B1
6658145 Silver et al. Dec 2003 B1
6681151 Weinzimmer et al. Jan 2004 B1
6687402 Taycher et al. Feb 2004 B1
6690842 Silver et al. Feb 2004 B1
6691126 Syeda-Mahmood Feb 2004 B1
6691145 Shibata et al. Feb 2004 B1
6714679 Scola et al. Mar 2004 B1
6728582 Wallack Apr 2004 B1
6748104 Bachelder et al. Jun 2004 B1
6751338 Wallack Jun 2004 B1
6751361 Wagman Jun 2004 B1
6760483 Elichai et al. Jul 2004 B1
6771808 Wallack Aug 2004 B1
6785419 Jojic et al. Aug 2004 B1
6836567 Silver et al. Dec 2004 B1
6850646 Silver Feb 2005 B1
6856698 Silver et al. Feb 2005 B1
6859548 Yoshioka et al. Feb 2005 B2
6870566 Koide et al. Mar 2005 B1
6901236 Saitoh et al. May 2005 B2
6903177 Seo et al. Jun 2005 B2
6909798 Yukawa et al. Jun 2005 B1
6950548 Bachelder et al. Sep 2005 B1
6959112 Wagman Oct 2005 B1
6963338 Bachelder et al. Nov 2005 B1
6973207 Akopyan et al. Dec 2005 B1
6975764 Silver et al. Dec 2005 B1
6985625 Silver et al. Jan 2006 B1
6993192 Silver et al. Jan 2006 B1
7006669 Lavagnino et al. Feb 2006 B1
7006712 Silver et al. Feb 2006 B1
7016539 Silver et al. Mar 2006 B1
7043055 Silver May 2006 B1
7043081 Silver et al. May 2006 B1
7058225 Silver et al. Jun 2006 B1
7065262 Silver et al. Jun 2006 B1
7068817 Bourg et al. Jun 2006 B2
7088862 Silver et al. Aug 2006 B1
7119351 Woelki Oct 2006 B2
7139421 Fix et al. Nov 2006 B1
7164796 Silver et al. Jan 2007 B1
7190834 Davis Mar 2007 B2
7239929 Ulrich et al. Jul 2007 B2
7251366 Silver et al. Jul 2007 B1
7260813 Du et al. Aug 2007 B2
7853919 Huang et al. Dec 2010 B2
8081820 Davis et al. Dec 2011 B2
8131063 Xiao et al. Mar 2012 B2
8144976 Shiell et al. Mar 2012 B1
8229222 Silver et al. Jul 2012 B1
8244041 Silver et al. Aug 2012 B1
8249362 Silver et al. Aug 2012 B1
8254695 Silver et al. Aug 2012 B1
8265395 Silver et al. Sep 2012 B1
8270748 Silver et al. Sep 2012 B1
8295613 Silver et al. Oct 2012 B1
8315457 Bogan et al. Nov 2012 B2
8320675 Silver et al. Nov 2012 B1
8331673 Silver et al. Dec 2012 B1
8335380 Silver et al. Dec 2012 B1
8363942 Silver et al. Jan 2013 B1
8363956 Silver et al. Jan 2013 B1
8363972 Silver et al. Jan 2013 B1
8457390 Barker et al. Jun 2013 B1
8891858 Preetham et al. Nov 2014 B1
20020054699 Roesch et al. May 2002 A1
20040081346 Louden et al. Apr 2004 A1
20050117801 Davis et al. Jun 2005 A1
20060110063 Weiss May 2006 A1
20090089736 Huang et al. Apr 2009 A1
20090096790 Wiedemann et al. Apr 2009 A1
20090185715 Hofhauser et al. Jul 2009 A1
20090290788 Bogan et al. Nov 2009 A1
20100146476 Huang et al. Jun 2010 A1
20110150324 Ngan Jun 2011 A1
20110202487 Koshinaka Aug 2011 A1
20120029289 Kucklick Feb 2012 A1
20120054658 Chuat Mar 2012 A1
20130018825 Ghani Jan 2013 A1
20130214280 Sato Aug 2013 A1
20130279800 Ranganathan Oct 2013 A1
20140198980 Fukui et al. Jul 2014 A1
Non-Patent Literature Citations (249)
Entry
Cognex Corporation, “Chapter 13 Golden Template Comparison,” Cognex 3000/4000/5000 Vision Tools, pp. 521-626 (2000).
Cognex Corporation, Chapter 7 CONPLAS, Cognex 3000/4000/5000 Programmable Vision Engines, Vision Tools, Revision 7.4, P/N 590-0136, pp. 307-340 (1996).
Cognex Corporation, Cognex 3000/4000/5000 Vision Tool, Revision 7.6, Chapter 4, Caliper Tool, 1996.
Cognex Corporation, Cognex 3000/4000/5000 Vision Tool, Revision 7.6, Chapter 5, Inspection, 1996.
Cognex Corporation, Cognex 3000/4400 SMD Tools Release 5.2, SMD 2, 1994.
Cognex Corporation, Cognex 4000/5000 SMD Placement Guidance Package, User's Manual Release 3.8.00, 1998.
Cognex Corporation, Cognex MVS-8000 Series, CVL Vision Tools Guide, pp. 25-136 Release 5.4 590-6271 (2000).
Cognex Corporation, Cognex MVS-80000 Series, GDE User's Guide, Revision 1.1, Apr. 7, 2000.
U.S. Appl. No. 61/008,900, filed Dec. 21, 2007 for System and Method for performing Multi-Image Training for Pattern Recognition and Registration, By Nathaniel Bogan, et al.
“Apex Model Object”, Cognex Corporation, acuWin version 1.5, 1997, pp. 1-17.
“Apex Search Object”, acuWin version 1.5, 1997, pp. 1-35.
“Apex Search Object Library Functions”, Cognex Corporation, 1998.
“Cognex 2000/3000/4000 Vision Tools”, Cognex Corporation, Chapter 2 Searching Revision 5.2 P/N 590-0103, 1992, pp. 1-68.
“Cognex 3000/4000/5000 Programmable Vision Engines, Vision Tools”, Chapter 1 Searching, Revision 7.4 590-1036, 1996, pp. 1-68.
“Cognex 3000/4000/5000 Programmable Vision Engines, Vision Tools”, Chapter 14 Golden Template Comparison, 1996, pp. 569-595.
“Description of Sobal Search”, Cognex Corporation, 1998.
“PCT Search Report for PCT/US2008/013823”, Jun. 19, 2009.
Alexander et al., “The Registration of MR Images Using Multiscale Robust Methods”, Magnetic Resonance Imaging, 1996, pp. 453-468, vol. 5.
Anisimov et al., “Fast Hierarchical matching of an arbitrarily oriented template”, Pattern Recognition Letters, vol. 14, No. 2, pp. 95-101 (1993).
Anuta, Paul E., “Spatial Registration of Multispectral and Multitemporal Digital Imagery Using Fast Fourier Transform Techniques”, IEEE Transactions on Geoscience Electronics, Oct. 1970, pp. 353-368, vol. GE-8, No. 4.
Araujo et al., “A Fully Projective Formulation for Lowe's Tracking Algorithm”, The University of Rochester Computer Science Department, pp. 1-41, Nov. 1996.
Ashburner et al., “Incorporating Prio Knowledge into Image Registration,” Neuroimage, vol. 6, No. 4, pp. 344-352 (1997).
Ashburner et al., “Nonlinear Spatial Normalization using Basis Functions,: The Welcome Depart. Of Cognitice Neurology”, Institute of Neurology, Queen Square, London, UK, pp. 1-34 (1999).
Ashburner et al., “Nonlinear Spatial Normalization using Basis Functions,”, Human Brain Mapping, vol. 7, No. 4, pp. 254-266 (1999).
Bachelder et al., “Contour Matching Using Local Affine Transformations”, Massachusetts Institute of Technology Artificial Intelligence Laboratory, A.I. Memo No. 1326 (Apr. 1992).
Baker, J, “Multiresolution Statistical Object Recognition”, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, pp. 1-6, (1994).205.
Baker, J., “Multiresolution Statistical Object Recognition”, master's thesis, Massachusetts Institute of Technology (1994).
Balkenius et al., “Elastic Template Matching as a Basis for Visual Landmark Recognition and Spatial Navigation”, AISB workshop on Spatial Reasoning in Mobile Robots and Animals, Jul. 1997, 1-10.
Balkenius et al., “The XT-1 Vision Architecture”, Symposium on Image Analysis, Lund University Cognitive Science, 1996, pp. 1-5.
Ballard et al., “Searching Near and Approximate Location”, Section 4.2, Computer Vision, 1982, pp. 121-131.
Ballard et al., “The Hough Method For Curve Detection”, Section 4.3, Computer Vision, 1982, pp. 121-131.
Ballard et al., “Generalizing the Hough Transform to Detect Arbitrary Shapes”, Pattern Recognition, vol. 13, No. 2 Pergamon Press Ltd. UK, 1981, pp. 111-122.
Belongie et al., “Shape Matching and Object Recognition Using Shape Contexts”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(24):509-522 (Apr. 2002).
Besl et al., “A Method for Registration of 3D Shapes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Feb. 1992, pp. 239-256, vol. 14, No. 2.
Bichsel, et al., “Strategies of Robust Object Recognition fo rthe Automatic Identification of Human Faces”, 1991, 1-157.
Bileschi et al., “Advances in Component-based Face Detection”, Lecture notes in Computer Science, Springer Verlag, New York, NY, vol. 2388, 2002, 135-143.
Blais et al., “Advances in Component-based Face Detection”, Lecture notes in Computer Science, Springer Verlag, New York, NY, vol. 2388, (2002), 135-143.
Bookstein, F L., “Principal Warps: Thin-Plate Splines and the Decomposition of Deformations”, IEEE Transactions on pattern Analysis and Machine Intelligence, IEEE Inc., New York, vol. 11, No. 6, Jun. 1, 1989.
Borgefors, Gunilla, “Hierarchical Chamfer Matching: A Parametric Edge Matching Algorithm”, IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. 10, No. 6, Nov. 1988.
Breuel, T, “Geometric Aspects of Visual Object Recognition”, Technical Report 1374, MIT Artificial Intelligence Laboratory, May 1992, pp. 1-173.
Brown, Lisa G., “A Survey of Image Registration Techniques”, ACM Computing Surveys, vol. 24, No. 4 Association for Computing Machinery, 1992, pp. 325-376.
Bruckstein et al., “Design of Shapes for Precise Image Registration”, IEEE Transaction on Information Theory, vol. 44, No. 7, Nov. 1998.
Bruzug et al., “Using an Entropy Similarity Measure to Enhance the Quality of DSA Images with an Algorithm Based on Template Matching”, Visualization in Biomedical Computer, pp. 235-240 (1996).
Caelli et al., “Fast Edge-Only Matching Techniques for Robot Pattern Recognition”, Computer Vision, Graphics and Image Processing 39, Academic Press, Inc., 1987, pp. 131-143.
Caelli et al., “On the Minimum Number of Templates Required for Shift, Rotation and Size Invariant Pattern Recognition”, Pattern Recognition, vol. 21, No. 3, Pergamon Press plc, 1988, pp. 205-216.
Chen et al., “Object Modeling by Registration of Multiple Range Images”, Image and Vision Computing, vol. 10, No. 3, pp. 145-155 (1992).
Chen et al., “Object Modeling by Registration of Multiple Range Images”, in IEEE ICRA, pp. 2724-2729 (1991).
Chew et al., “Geometric Pattern Matching under Euclidean Motion”, Computational Geometry, vol. 7, Issues 1-2, Jan. 1997, pp. 113-124, 1997 Published by Elsevier Science B.V.
Cognex Corporation, “Cognex Products on Sale as of one year before filing for U.S. Pat. No. 7,016,539, Jul. 12, 1997”.
Cognex Corporation, “description of Overlap in Cognex search tool and description of Overlap in Cnlpas Tool as of Jul. 12, 1997”.
Cognex Corporation, “Excerpted description of AcuFinder product of Mar. 1997 as written Mar. 1, 2012”, Mar. 1997.
Cootes et al., “Active Shape Models—Their Training and Application”, Computer Vision and Image Understanding, vol. 61, No. 1, Jan. 1995, 38-59.
Cox et al., “Predicting and Estimating the Accuracy of a Subpixel Registration Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence”, vol. 12, No. 8, Aug. 1990, 721-734.
Cox et al., “On the Congruence of Noisy Images to Line Segment Models”, International Conference on Computer Vision, pp. 252-258 (1988).
Crouzil et al., “A New Correlation Criterion Based on Gradient Fields Similarity”, Proceedings of the 13th International Conference on Pattern Recognition vol. I Track A, Computer Vision, 1996, pp. 632-636.
Dana et al., “Registration of Visible and Infrared Images”, pp. 1-12, vol. 1957.
Declerck et al., “Automatic Registration and Alignment on a Template of Cardiac Stress & Rest SPECT Images”, IEEE Proc of MMBIA 1996, pp. 212-221.
Defigu El Redo et al., “Model Based Orientation Independent 3-D machine Vision Techniques,”, IEEE Transactions on Aerospace and Electronic Systems, vol. 24, No. 5 Sep. 1988, pp. 597-607.
Dementhon et al., “Model-Based Object Pose in 25 Lines of Code”, International Journal of Computer Vision, 1995, pp. 123-141, Kluwer Academic Publishers, Boston, MA.
Dementhon et al., “Model-Based Object Pose in 25 Lines of Code”, Proceedings of the Second European Conference on Computer Vision, pp. 335-343 (1992).
Devernay, F., “A Non-Maxima Suppression Method for Edge Detection with Sub-Pixel Accuracy”, Institut National de Recherche en Informatique et en Automatique, No. 2724, Nov. 1995, 24 pages.
Dorai et al., “Optimal Registration of Multiple Range Views”, IEEE 1994, pp. 569-571.
Drewniok et al., “High-Precision Localization of Circular Landmarks in Aerial Images”, Proc. 17, DAGM-Symposium, Mustererkennung 1995, Bielfield, Germany, Sep. 13-15, 1995, pp. 594-601.
Duta et al., “Automatic construction of 2D shape models”, IEEE Transaction on Pattern Analysis and Machine Intelligence, IEEE Service Center, Los Alamitos, CA, May 1, 2001.
Eric et al., “On the Recognition of Parameterized 2D Objects, International Journal of Computer Vision”, (1988),353-372.
Eric et al., “On the Recognition of Parameterized 2D Objects, International Journal of Computer Vision”, 1988, 353-372.
Feddema et al., “Weighted Selection of Image Features for Resolved Rate Visual Feedback Control”, IEEE Transactions on Robitics and Automation, vol. 7 No. 1, Feb. 1991, pp. 31-47.
Feldmar et al., “3D-2D Projective Registration of Free-Form Curves and Surfaces”, Computer Vision and Image Understanding, vol. 65, No. 3, (Mar. 1997),403-424.
Fischer et al., “On the Use of Geometric and Semantic Models for Component-Based Building Reconstruction”, Institute for Photography, University of Bonn, pp. 101-119, 1999.
Fitzpatrick et al., “Handbook of Medical Imaging”, vol. 2: Medical image Processing and Analysis, SPIE Press, Bellingham, WA, 2000, 447-513.
Foley et al., “Introduction to Computer Graphics”, pp. 36-49 (1994).
Forsyth et al., “Invariant Descriptors for 3-D Object Recognition and Pose”, IEEE Transactions on Pattern Analysis and Machines Intelligence, vol. 13, No. 10, Oct. 1991, pp. 971-991.
Foster et al., “Attributed Image Matching Using a Minimum Representation Size criterion”, PhD. Thesis, Carnegie mellon University, 1987, pp. 1-142.
Foster et al., “Determining objection orientation from a single image using multiple information sources”, CMU-RI-TR-84-15, Jun. 1984, pp. 1-96.
Foster, Nigel, “Determining objection orientation using ellipse fitting”, SPIE vol. 521-Intelligent Robots and Computer Vision, 1985 pp. 34-43.
Gavrila et al., 3-D Model-Based Tracking of Human Upper Body Movement, A Multi-View Approach, Computer Vision Laboratory, 1995, 253-258.
Gavrila et al., “3-D Model-Based Tracking of Humans in Action”, A Multi-View Approach, Computer Vision Laboratory, 1997, 73-80.
Gavrila et al., “Multi-Feature Hierarchical Template Matching using Distance Transforms”, Daimler-Benz AG, Research and Technology, 6 pages, 1996.
Gdalyahu et al., “Self-Organization in Vision: Stochastic Clustering for Image Segmentation, Perceptual Grouping, and Image Database Organization”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Inc., New York, US, vol. 23, No. 10, Oct. 2001, Oct. 2001, 1053-1074.
Ge et al., “Surface-based 3-D image registration using the Iterative Closest Point Algorithm with a closest point transform”, Medical Imaging 1996: Image Processing, M. Loew, K. Hanson, Editors, Proc. SPIE 2710, pp. 358-367 (1996).
Geiger et al., “Dynamic Programming for Detecting, Tracking, an Matching Deformable contours”, IEEE, 1995, pp. 294-302.
Gennery, Donald B., “Visual Tracking of Known Three-Dimensional Objects”, International Journal of Computer Vision, 1992, 243-270.
Gorman et al., “Recognition of incomplete polygonal objects”, IEEE, 1989, pp. 518-522.
Gottesfeld et al., “A Survey of Image Registration Techniques”, Department of Computer Science, Columbia University, New York, NY 10027, ACM Computing Surveys, vol. 24, No. 4, Dec. 1992.
Gottesfeld et al., “Registration of Planar Film Radiographs with Computed Tomography”, 1996 Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA '96), pp. 42-51 (1996).
Grimson et al., “On the Sensitivity of the Hough Transform for Object Recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12. No. 3, 1990, pp. 255-274.
Haag et al., “Combination of Edge Element and Optical Flow Estimates for 3D Model Based Vehicle Tracking in Traffic Image Sequences”, International Journal of Computer Vision, 1999, pp. 295-319.
Han et al., “An Edge-Based Block Matching Technique for Video Motion”, Image Processing Algorithms and Techniques II, 1991, pp. 395-408, vol. 1452.
Haralick et al., “Pose Estimation from Corresponding Point Data”, IEEE Trans. On Systems, Man and Cybernetics, vol. 19, No. 6, pp. 1426-1445, 1989.
Hashimoto et al., “An Edge Point Template Matching Method for High Speed Difference Detection between Similar Images”, Industrial Electronics and Systems Development Laboratory, Mitsubishi Electric Corp., PRU, vol. 90, No. 3, (1990), 8 pages.
Hashimoto et al., “High Speed Template Matching Algorithm Using Information of Edge Points”, Trans IEICE Technical Report D-II, vol. J74-D-II, No. 10, pp. 1419-1427 (Oct. 1991).
Hashimoto et al., “High-Speed Template Matching Algorithm Using Contour Information”, Proc. SPIE, vol. 1657, pp. 374-385 (1992).
Hashimoto et al., “High-Speed Template Matching Algorithm Using Information of Contour Points”, Systems and Computers in Japan, 1992, pp. 78-87, vol. 23, No. 9.
Hauck et al., “Hierarchical Recognition of Articulated Objects from Single Perspective Views”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings, 1997, pp. 1-7.
Hauck et al., “A Hierarchical World Model with Sensor- and Task-Specific Features”, 8 pages, 1996.
Havelock, David, “Geometric Precision in Noise-Free Digital Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, No. 10, Oct. 1989.
Hicks et al., “Automatic landmarking for building biological shape models”, IEEE ICIP 2002, vol. 2, pp. 801-804, Sep. 22, 2002.
Hill et al., “Automatic landmark generation for point distribution models”, Proc. British Machine Vision Conference, vol. 2, pp. 429-438, 1994.
Hill et al., “Medical Image Registration”, Institute of Physics Publishing; Phys. Med. Biol. 46 (2001), pp. R1-R45.
Hill et al., “Voxel Similarity Measures for Automated Image Registration”, Proc. SPIE, vol. 2359, pp. 205-216 (1994).
Hill, John W., “Machine Intelligence Research Applied to Industrial Automation”, U.S. Department of Commerce, National Technical Information Service, SRI International Tenth Report, Nov. 1980.
Hirako, K., “Development of an automatic detection system for microcalcification lesions in mammography”, Trans. IEICE Japan D-II, vol. J78-D-II, No. 9, pp. 1334-1343 (Sep. 1995).
Hirooka et al., “Hierarchical distributed template matching”, Proc. SPIE vol. 3029, p. 176-183 (1997).
Hoff et al., “Pose Estimation of Artificial Knee Implants in Fluoroscopy Images Using a Template Matching Technique”, Proc. of 3rd IEEE Workshop on Applications of Computer Vision, Dec. 2-4, 1996, 7 pages.
Holden et al., “Voxel Similarity Measures for 3D Serial MR Brain Image Registration”, IEEE Transactions on Medical Imaging, vol. 19, No. 2, pp. 94-102 (2000).
Hoogs et al., “Model Based Learning of Segmentations”, pp. 494-499, IEEE, 1996.
Hsieh et al., “Image Registration Using a New Edge-Based Approach”, Computer Vision and Image Understanding, vol. 67, No. 2, 1997, pp. 112-130.
Hu et al., “Expanding the Range of Convergence of the CORDIC Algorithm”, IEEE Transactions on Computers vol. 40, No. 1, pp. 13-21 (Jan. 1991).
Hu, Y., “CORDIC-Based VLSI Architectures for Digital Signal Processing”, IEEE Signal Processing Magazine, pp. 16-35, 1053-5888/92 (Jul. 1992).
Hugli et al., “Geometric matching of 3D objects assessing the range of successful initial configurations”, IEEE pp. 101-106, 1997.
Hung et al., “Subpixel Edge Estimation Using Geometrical Edge Models with Noise Minimization”, 1994, pp. 112-117.
Hutchinson et al., “A Tutorial on Visual Servo Control”, IEEE Transactions on Robotics and Automation, vol. 12, No. 5, Oct. 1996, 20 pages.
Huttenlocher et al., “A Multi-Resolution Technique for Comparing Images Using the Hausdorff Distance”, 1993 IEEE, pp. 705-706.
Huttenlocher, Daniel P., “Comparing Images using the Hausdorff Distance”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 9, Sep. 1993.
Jacobs, D.W., “The Use of Grouping in Visual Object Recognition”, MIT Artificial Intelligence Laboratory, Office of Naval Research, pp. 1-162, Oct. 1988.
Jahne, Geissler, et al., “Handbook of Computer Vision and Applications”, vol. 2, Academic Press, (1990), Chapter 5, 43 pages.
Jain et al., “Object Matching Using Deformable Templates”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, No. 3, Mar. 1996, 267-278.
Jain et al., “Machine Vision”, McGraw-Hill, 1995, 207 pages.
Jebara, T.S., “3D Pose Estimation and Normalization for Face Recognition”, Undergraduate Thesis, Department of Electrical Engineering, McGill University, May 1996, 138 pages.
Jiang et al., “A New Approach to 3-D Registration of Multimodality Medical Images by Surface Matching”, SPIE vol. 1808, pp. 196-213 (1992).
Jiang et al., “Image Registration of Multimodality 3-D Medical Images by Chamfer Matching”, Biomedical Image Processing and Three Dimensional Microscopy, SPIE vol. 1660, pp. 356-366 (1992).
Jokinen, O, “Area-Based Matching for Simultaneous Registration of Multiple 3-D Profile Maps”, CVIU, vol. 71, No. 3, pp. 431-447 (Sep. 1998).
Jokinen, O, “Building 3-D City Models from Multiple Unregistered Profile Maps”, First International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 242-249 (1997).
Jokinen, O., “Matching and modeling of multiple 3-D disparity and profile maps”, Ph.D Thesis Helsinki Univ. of Technology, Helsinki, Finland (2000).
Jokinen et al., “Relative orientation of two disparity maps in stereo vision”, 6 pages 1995.
Jordan, J, “Alignment mark detection using signed-contrast gradient edge maps”, Proc. SPIE vol. 1661, pp. 396-407 (1992).
Joseph, S. H., “Fast Optimal Pose Estimation for Matching in Two Dimensions”, Image Processing and its Applications, Fifth International Conference, 1995.
Kashioka et al., “A transistor Wire-Bonding System Utilizing Multiple Local Pattern Matching Techniques”, pp. 562-570 (1976).
Kawamura et al., “On-Line Recognition of Freely Handwritten Japanese Characters Using Directional Feature Densities”, IEEE, pp. 183-186, 1992.
Kersten et al., “Automatic Interior Orientation of Digital Aerial Images”, Photogrammetric Engineering & Remote Sensing, vol. 63, No. 8, pp. 1007-1011 (1997).
Kersten et al., “Experience with Semi-Automatic Aerotriangulation on Digital Photogrammetric Stations”, Great Lakes Conference on Digital Photogrammetry and Remote Sensing (1995).
Koller et al., “Model-Based Object Tracking in Monocular Image Sequences of Road Traffic Scenes”, International Journal of Computer Vision, 1993, pp. 257-281.
Kollnig et al., “3D Pose Estimation by Directly Matching Polyhedral Models to Gray Value Gradients”, International Journal of Computer Vision, 1997, pp. 283-302.
Kollnig et al., “3D Pose Estimation by Fitting Image Gradients Directly to Polyhedral Models”, IEEE, 1995, pp. 569-574.
Kovalev et al., “An Energy Minimization Approach to the Registration, Matching and Recognition of Images”, Lecture Notes in Computer Science, vol. 1296, Proceedings of the 7th International Conference on Computer Analysis of Images and Patterns, pp. 613-620 (1997).
Kraut et al., “Comparison of Functional MR and H2 15O Positron Emission Tomography in Stimulation of the Primary Visual Cortex”, AJNR Am J Neuroradiol 16, American Society of Neuroradiology, Nov. 1995, pp. 2101-2107.
Lamdan et al., “Affine Invariant Model-Based Object Recognition”, IEEE Transactions on Robotics and Automation, Oct. 1990, pp. 578-589, vol. 6, No. 5.
Lang et al., “Robust Classification of Arbitrary Object Classes Based on Hierarchical Spatial Feature-Matching”, Machine Vision and Applications, 1997, 123-135.
Lanser et al., “MORAL: A Vision-Based Object Recognition System for Autonomous Mobile Systems”, 9 pages, 1997.
Lanser et al., “Robust Video-Based Object Recognition Using CAD Models”, 8 pages 1995.
Lemieux et al., “A Patient-to-Computed-Tomography Image Registration Method Based on Digitally Reconstructed Radiographs”, Med. Phys., vol. 21, No. 11, pp. 1749-1760 (Nov. 1994).
Li et al., “A Contour-Based Approach to Multisensor Image Registration”, IEEE Transactions on Image Processing, Mar. 1995, pp. 320-334, vol. 4, No. 3.
Li et al., “On Edge Preservation in Multiresolution Images”, Graphical Models and Image Processing, 1992, pp. 461-472, vol. 54, No. 6.
Lin et al., “On-Line CORDIC Algorithms”, IEEE Transactions on Computers, vol. 39, No. 8, 1990, pp. 1038-1052.
Lindeberg, T, “Discrete Derivative Approximations with Scale-Space Properties: A Basis for Low-Level Feature Extraction”, Journal of Mathematical Imaging and Vision, 1993, pp. 349-376.
Lu, “Shape Registration Using Optimization for Mobile Robot Navigation”, Department of Computer Science, University of Toronto, 1995, 1-163.
Maes et al., “Multimodality Image Registration by maximization of Mutual Information”, IEEE Transactions on Medical Imaging, vol. 16, No. 2, Apr. 1997, pp. 187-198.
Maes et al., “Comparative evaluation of multiresolution optimization strategies for multimodality image registration by maximization of mutual information”, Medical Image Analysis, vol. 3, No. 4, pp. 373-386 (1999).
Maes, F., “Segmentation and Registration of Multimodal Medical Images”, PhD thesis Katholieke Universiteit Leuven (1998).
Maio et al., “Real-time face location on Gray-Scale Static Images”, Pattern Recognition, The Journal of the Pattern Recognition Society, 2000, pp. 1525-1539.
Makous, W, “Optimal Patterns for Alignment”, Applied Optics, vol. 13, No. 3, Mar. 1974, 6 pages.
Marchand et al., “A 2D-3D Model-Based Approach to Real-Time Visual Tracking”, Institut National de Recherche en Informatique et en Automatique, No. 3920, Mar. 2000, 33 pages.
Marchand et al., “Robust Real-Time Visual Tracking using a 2D-3D Model Based Approach”, IEEE 7 pages 1999.
Marchand et al., “A 2D-3D model-based approach to real-time visual tracking”, Image and Vision Computing, vol. 19, Issue 13, Nov. 2001, pp. 941-955.
Masuda et al., “A robust Method for Registration and Segmentation of Multiple Range Images”, Computer Vision and Image Understanding, vol. 61, No. 3, May 1995, pp. 295-307.
Masuda et al., “Detection of partial symmetry using correlation with rotated reflected images”, Pattern Recognition, vol. 26, No. 8, pp. 1245-1253 (1993).
McGarry, John, Description of Acumen radius of inhibition, at least as early as Mar. 31, 1997.
McGarry, John, Description of AcuFinder boundary, at least as early as Mar. 31, 1997.
McGarry, John, Description of AcuFinder high resolution search, at least as early as Mar. 31, 1997.
McGarry, John, Description of AcuFinder models, at least as early as Mar. 31, 1997.
McGarry, John, Description of AcuFinder search results, at least as early as Mar. 31, 1997.
McGarry, John, Description of search zone, at least as early as Mar. 31, 1997.
McGarry, John, Description of AcuFinder search space sampling rate, at least as early as Mar. 31, 1997.
Medina-Mora, R., “An Incremental Programming Environment”, IEEE Transactions on Software Engineering, Sep. 1981, pp. 472-482, vol. SE-7, No. 5.
Mehrotra et al., “Feature-Based Retrieval of Similar Shapes”, Proceedings of the International Conference on Data Engineering, Vienna, IEEE Comp. Soc. Press, Vol. Conf. 9, Apr. 19, 1993, pp. 108-115.
Meijering et al., “Image Registration for Digital Subtraction Angiography”, International Journal of Computer Vision, vol. 31, No. 2, pp. 227-246 (1999).
Miller et al., “Template Based Method of Edge Linking Using a Weighted Decision”, IEEE, pp. 1808-1815, 1993.
Neveu et al., “Two-Dimensional Object Recognition Using Multi-resolution Models”, Computer Vision, Graphics, and Image Processing, 1986, 52-65.
Newman et al., “3D CAD-Based inspection I: Coarse Verification”, IEEE, 1992, pp. 49-52.
Oberkampf et al., “Iterative Pose Estimation Using Coplanar Feature Points”, Computer Vision and Image Understanding, vol. 63, No. 3, pp. 495-511 (1996).
Oberkampf et al., “Iterative Pose Estimation Using Coplanar Feature Points”, International Conference on Computer Vision and Pattern Recognition, pp. 626-627 (1993).
O'Gorman, Lawrence, “Subpixel Precision of Straight-Edged Shapes for Registration and Measurement”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, No. 7, Jul. 1996, 6 pages.
Ohm, Jens-Rainer, “Digitale Bildcodierung” (Digital Image Coding), Springer Verlag, Berlin, 217580, XP0002303066, Section 6.2, Bewegungsschätzung (Motion Estimation), 1995.
Olson et al., “Automatic Target Recognition by Matching Oriented Edge Pixels”, IEEE Transactions on Image Processing, Jan. 1997, pp. 103-113, vol. 6, No. 1.
Pauwels, E. J. et al., “Finding Salient Regions in Images”, Computer Vision and Image Understanding, Academic Press, San Diego, CA, US, vol. 75, No. 1-2, Jul. 1999, 73-85.
Perkins, W.A., “Inspector: A Computer Vision System that Learns to Inspect Parts”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 6, Nov. 1983.
Plessey Semiconductors, Preliminary Information, Publication No. PS2067, May, 1986, 1-5.
Pluim et al., “Interpolation Artifacts in Mutual Information-Based Image Registration”, Computer Vision and Image Understanding 77, 211-232 (2000).
Pluim, J, “Multi-Modality Matching Using Mutual Information”, Master's thesis, Department of Computing Science, University of Groningen (1996).
Pluim et al., “Mutual information matching and interpolation artefacts”, Proc. SPIE, vol. 3661, (1999), 10 pages.
Pratt, William, “Digital Image Processing”, Sun Microsystems, Inc., pp. 651-673, 1978.
Ray, R., “Automated inspection of solder bumps using visual signatures of specular image-highlights”, Computer Vision and Pattern Recognition, Proceedings CVPR, 1989, 588-596.
Rueckert et al., “Nonrigid Registration of Breast MR Images Using Mutual Information”, Proceedings of the Medical Image Computing and Computer Assisted Intervention Society, pp. 1144-1152 (1998).
Rignot et al., “Automated Multisensor Registration: Requirements and Techniques”, Photogrammetric Engineering & Remote Sensing, vol. 57, No. 8, pp. 1029-1038 (1991).
Roche et al., “Generalized Correlation Ratio for Rigid Registration of 3D Ultrasound with MR Images”, Medical Image Computing and Computer-Assisted Intervention—MICCAI 2000, pp. 567-577 (2000).
Roche et al., “Multimodal Image Registration by Maximization of the Correlation Ratio”, Rapport de Recherche No. 3378, Unite de Recherche INRIA Sophia Antipolis, INRIA (Aug. 1998).
Roche et al., “The Correlation Ratio as a new Similarity Measure for Multimodal Image Registration”, Medical Image Computing and Computer Assisted Intervention—MICCAI'98 pp. 1115-1124 (1998).
Rosenfeld et al., “Coarse-Fine Template Matching”, IEEE Transactions on Systems, Man, and Cybernetics, 1977, pp. 104-107.
Rueckert et al., “Nonrigid Registration using Free-Form Deformations: Application to Breast MR Images”, IEEE Transactions on Medical Imaging, vol. 18, No. 8, pp. 712-721 (1999).
Rummel et al., “Workpiece Recognition and Inspection by a Model-Based Scene Analysis System”, Pattern Recognition, 1984, pp. 141-148, vol. 17, No. 1.
Sakai et al., “Line Extraction and pattern Detection in a Photograph”, Pattern Recognition, 1969, 233-248.
Sanderson et al., “Attributed Image Matching Using a Minimum Representation Size Criterion,” IEEE 1989, pp. 360-365.
Scanlon et al., “Graph-Theoretic Algorithms for Image Segmentation”, Circuits and Systems, ISCAS '99 Proceedings of the 1999 IEEE International Symposium on Orlando, FL, IEEE, May 30, 1999, 141-144.
Schutz et al., “Recognition of 3-D Objects with a Closest Point Matching Algorithm”, Proc. Conference ISPRS intercommission workshop, vol. 30, issue 5W1 (1995), 6 pages.
Sebastian et al., “Constructing 2D curve atlases”, Proceedings IEEE Workshop on Mathematical Methods in Biomedical Image Analysis, 2000.
Seitz et al., “The Robust Recognition of Traffic Signs From a Moving Car”, 1991, 287-294.
Seitz, P., “The robust recognition of object primitives using local axes of symmetry”, Signal Processing, vol. 18, pp. 89-108 (1989).
Seitz, Peter , “Using Local Orientational Information as Image Primitive for Robust Object Recognition”, Visual Communications and Image Processing IV, 1989, pp. 1630-1639, vol. 1199.
Shekhar et al., “Multisensor image registration by feature consensus”, Pattern Recognition, vol. 32, No. 1, pp. 39-52 (1999).
Shi et al., “Normalized Cuts and Image Segmentation”, Computer Vision and Pattern Recognition, Proceedings, IEEE Computer Society Conference on San Juan, IEEE Comput. Soc., Jun. 17, 1997, 731-737.
Shi et al., “Normalized Cuts and Image Segmentation”, IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. 22, No. 8, Aug. 2000, 888-905.
Steger, C., “An Unbiased Detector of Curvilinear Structures”, Technische Universität München, Technical Report FGBV-96-03, Jul. 1996, 32 pages.
Stevens et al., “Precise Matching of 3-D Target Models to Multisensor Data”, IEEE Transactions on Image Processing, vol. 6, No. 1, Jan. 1997, pp. 126-142.
Stimets et al., “Rapid Recognition of Object Outlines in Reduced Resolution Images”, Pattern Recognition, 1986, pp. 21-33, vol. 19, No. 1.
Stockman et al., “Matching images to models for registration and object detection via clustering”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Inc., New York, vol. PAMI-4, No. 3, 1982.
Streilein et al., “Towards Automation in Architectural Photogrammetry: CAD Based 3D-Feature Extraction”, ISPRS Journal of Photogrammetry and Remote Sensing pp. 4-15, 1994.
Studholme et al., “An Overlap Invariant Entropy Measure of 3D Medical Image Alignment”, Pattern Recognition, The Journal of the Pattern Recognition Society, Pattern Recognition 32, 71-86 (1999).
Studholme, C, “Measures of 3D Medical Image Alignment”, PhD thesis, University of London (1997).
Suk et al., “New Measures of Similarity Between Two Contours Based on Optimal Bivariate Transforms”, Computer Vision, Graphics and Image Processing, 1984, pp. 168-182.
Sullivan et al., “Model-based Vehicle Detection and Classification using Orthographic Approximations”, The University of Reading, 10 pages 1996.
Sullivan et al., “Model-based Vehicle Detection and Classification using Orthographic Approximations”, Image and Vision Computing 15, 1997, pp. 649-654.
Sullivan, Neal, “Semiconductor Pattern Overlay”, Digital Equipment Corp., Advanced Semiconductor Development Critical Dimension Metrology and Process Control, Critical Reviews vol. CR52, 29 pages.
Tanaka et al., “Picture Assembly Using a Hierarchical Partial-Matching Technique”, IEEE Transactions on Systems, Man., and Cybernetics, vol. SMC-8, No. 11, pp. 812-819 (Nov. 1978).
Tangelder et al., “Measurement of Curved Objects Using gradient Based Fitting and CSG Models”, Commission V. Working Group 2, 8 pages.
Tanimoto, S.L., “Template Matching in Pyramids”, Computer Graphics and Image Processing, vol. 16, pp. 356-369 (1981).
Thevenaz et al., “Optimization of Mutual Information for Multiresolution Image Registration”, IEEE Transactions on Image Processing, vol. 9, No. 12, pp. 2083-2099 (Dec. 2000).
Tian et al., “Algorithms for Subpixel Registration”, Computer Vision Graphics and Image Processing 35, Academic Press, Inc., 1986, pp. 220-233.
Tretter et al., “A Multiscale Stochastic Image Model for Automated Inspection”, IEEE Transactions on Image Processing, vol. 4, No. 12, Dec. 1995, pp. 1641-1654.
Turk et al., “Zippered Polygon Meshes for Range Images”, SIGGRAPH/ACM 1994, 8 pages.
Ullman, S, “Aligning pictorial descriptions: An approach to object recognition”, Cognition, vol. 32, No. 3, pp. 193-254, Aug. 1989.
Ullman et al., “Recognition by Linear Combinations of Models”, A.I. Memo No. 1152, Massachusetts Institute of Technology Artificial Intelligence Laboratory, 1989, 43 pages.
Umeyama, S., “Least Squares Estimation of Transformation Parameters Between Two Point Patterns”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, No. 4, pp. 376-380, 1991.
Valkenburg et al., “An Evaluation of Subpixel Feature Localisation Methods for Precision Measurement”, SPIE vol. 2350, 1994, 10 pages.
Van Herk et al., “Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching”, Medical Physics, vol. 21, No. 7, pp. 1163-1178 (1994).
Vosselman, G., “Interactive Alignment of Parameterised Object Models to Images”, Commission III, Working Group 3, 7 pages, 1998.
Wachter et al., “Tracking persons in Monocular Image Sequences”, Computer Vision and Image Understanding, vol. 74, No. 3, Jun. 1999, pp. 174-192.
Wallack, Aaron, “Algorithms and Techniques for Manufacturing”, Ph.D. Thesis, University of California at Berkeley, 1995, Chapter 4, 93 pages.
Wallack, Aaron S., “Robust Algorithms for Object Localization”, International Journal of Computer Vision, May, 1998, 243-262.
Weese et al., “Gray-Value Based Registration of CT and MR Images by Maximization of Local Correlation”, Medical Image Computing and Computer-Assisted Intervention, MICCAI'98, pp. 656-664 (1998).
Wei et al., “Recognition and Inspection of Two-Dimensional Industrial Parts Using Subpolygons”, Pattern Recognition, Elsevier, Kidlington, GB, vol. 25, No. 12, Dec. 1, 1992, pp. 1427-1434.
Wells et al., “Multi-modal Volume Registration by maximization of Mutual Information”, Medical Image Analysis (1996) vol. 1, No. 1, pp. 35-51.
Wells et al., “Statistical Object Recognition”, Submitted to the Department of Electrical Engineering and Computer Science, Nov. 24, 1992, 1-177.
Wells, W., “Statistical Approaches to Feature-Based Object Recognition”, International Journal of Computer Vision, vol. 21, No. 1/2, pp. 63-98 (1997).
Westling et al., “Object recognition by fast hypothesis generation and reasoning about object interactions”, 7 pages, 1996.
Whichello et al., “Document Image Mosaicing”, IEEE, 3 pages. 1998.
White et al., “Two Methods of Image Extension”, Computer Vision, Graphics, and Image Processing 50, 342-352 (1990).
Wilson, S, “Vector morphology and iconic neural networks”, IEEE Trans. Systems Man. Cybernet, 19(6):1636-1644 (1989).
Wong et al., “Sequential hierarchical scene matching”, IEEE Transactions on Computers, vol. C-27(4):359-366 (1978).
Worrall et al., “Pose Refinement of Active Models Using Forces in 3D”, 10 pages. 1994.
Wu et al., “Registration of a SPOT Image and a SAR Image Using Multiresolution Representation of a Coastline”, 10th International Conference on Pattern Recognition, Jun. 16-21, 1990, pp. 913-917.
Wunsch et al., “Registration of CAD-Models to Images by Iterative Inverse Perspective Matching”, German Aerospace Research Establishment—DLR Institute for Robotics and System Dynamics, Proceedings of the ICPR, (1996).
Xie et al., “A New Fuzzy Clustering Validity Criterion and its Application to Color Image Segmentation”, Proceedings of the International Symposium on Intelligent Control, New York, IEEE, Aug. 13, 1991, 463-468.
Yamada, Hiromitsu , “Map Matching-Elastic Shape Matching by Multi-Angled Parallelism”, Apr. 1990, pp. 553-561, vol. J73-D-II, No. 4.
Zhang, Z, “Iterative Point Matching for Registration of Free-Form Curves”, INRIA, Rapports de Recherche No. 1658, Programme 4, Robotique, Image et Vision, Unite De Recherche Inria-Sophia Antipolis (Mar. 1992).
Zhang, Z., “On Local Matching of Free-Form Curves”, British Machine Vision Conference, pp. 347-356 (1992).
Zhang, Z, “Iterative point matching for registration of free-form curves and surfaces”, IJCV, vol. 13, No. 2, pp. 119-152 (1994).
Zhang, Zhengyou, “Parameter estimation techniques: A tutorial with application to conic fitting”, Image and Vision Computing, Elsevier Science Ltd., Oxford, England, vol. 15, No. 1, Jan. 1, 1997.
Zhao et al., “A non-linear corresponder for 2D automated point distribution model construction”, Control, Automation, Robotics and Vision Conference, China, Dec. 6, 2001.
Hou et al., “A New Multi-Scale Multi-Model Automatic Image Registration Algorithm”, Jul. 23, 2007, Published in: CN.
Related Publications (1)
Number: 20150003726 A1, Date: Jan. 2015, Country: US

Provisional Applications (1)
Number: 61841142, Date: Jun. 2013, Country: US