Aspects of the disclosure relate to machine vision. Other aspects relate to the refinement of search results of coarse search methods, and to certain tracking methods.
In machine vision systems, a run-time image is often searched for a pattern in the image using a known pattern called a model, a model image, or a template. A result of such a search is called a “pose”, which is a transformation that describes the n-dimensional position of the template that provides the closest match to the pattern sought in the run-time image. Thus, the pose maps points from a template (or model, or model image) to points in a run-time image.
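For concreteness, a pose can be thought of as a parameterized point transform. The sketch below is a minimal illustration of that idea, not anything prescribed by this disclosure; the function name and the choice of a 3×3 homography (a perspective pose) are assumptions made for the example.

```python
# Minimal sketch: a "pose" as a transform mapping template points into a
# run-time image. The homography representation here is illustrative only.
import numpy as np

def map_point(pose: np.ndarray, p) -> np.ndarray:
    """Map a 2-D template point through a 3x3 pose matrix (perspective divide)."""
    x, y, w = pose @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

identity = np.eye(3)
print(map_point(identity, (10.0, 20.0)))  # -> [10. 20.]
```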
It is known to perform a search as a two-step process, including a coarse search, followed by a fine search, the fine search sometimes being called "pose refinement", because the result of the coarse search is a pose. For example, pose refinement is used in PatMax™, sold by Cognex Corporation, Natick, Mass. In PatMax™, an initial affine transformation, produced by a coarse search mechanism, is refined by a pose refinement mechanism so as to provide a fine search result.
Affine transformations include translation, uniform scale, and rotation. However, PatMax™ cannot effectively search for patterns that require a non-linear transformation to map from the template to the run-time image. Examples of such non-linear transformations include: thin-plate spline, cone, cylinder, perspective, and polynomial.
There are search mechanisms that can effectively search for patterns that require a non-linear transformation to map from the template to the run-time image. For example, the search mechanism disclosed in U.S. Pat. No. 7,190,834, filed Jul. 22, 2003, entitled “Methods for Finding and Characterizing a Deformed Pattern in an Image” can effectively do such searching. However, the result from such searching as disclosed therein is a coarse search result, and such coarse search results require pose refinement for many applications that require a high degree of accuracy. Consequently, there is a need for a pose refinement mechanism that can effectively refine coarse non-linear search results.
Further, it is known that pose refinement can be used to perform tracking of a pattern in an image that is undergoing transformations over a sequence of images. However, since known pose refinement mechanisms are linear, being limited to performing only affine transformations, many important transformations of patterns in a sequence of run-time images cannot be tracked. In particular, since tracking of moving three-dimensional objects involves the perspective transformation, a form of non-linear transformation, this important transformation cannot be tracked over a sequence of run-time images using known methods.
Per one embodiment, a method is provided for refining a pose estimate of a model. The model is coarsely aligned with a run-time image, and it represents a two-dimensional model pattern. The pose estimate includes at least one pose estimate parameter. The model has a plurality of model features, and the run-time image has a plurality of run-time features. A given distance value is determined representing a given distance between a given one of the plurality of model features, mapped using a given pose estimate, and a corresponding one of the plurality of run-time features. A two-dimensional model description of the two-dimensional model pattern is provided. The two-dimensional model pattern is mapped using the given pose estimate to create a transformed version of the two-dimensional model pattern. The transformed version represents a non-linear movement of at least portions of the two-dimensional model pattern in a direction orthogonal to a plane of the two-dimensional model description.
Embodiments of the invention will be more fully understood by reference to the detailed description, in conjunction with the accompanying figures.
A method of the disclosure takes as input a parameterized coarse pose, which can be found using one of a variety of search methods, both linear and non-linear. The method also takes as input a model and a run-time image, both providing a plurality of edgelets. Edgelets of the model are then mapped into the run-time image using the coarse pose. Next, the method changes the parameters of the coarse pose incrementally so as to better align the mapped edgelets of the model with the edgelets of the run-time image. This is accomplished by modifying the parameters of the coarse pose such that the point-to-line distance between matched edgelet pairs, averaged over all matched edgelet pairs, is minimized. The point-to-line distance is the perpendicular distance from the location of the mapped model edgelet to a line collinear with the run-time image edgelet. Pairs of matched edgelets are determined by finding the closest run-time edgelet for each mapped edgelet using a Euclidean distance calculation. The number of run-time edgelets considered for each mapped edgelet is determined by specifying a capture range in both Euclidean distance between edgelet locations and angular distance between edgelet orientations.
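The following toy example illustrates the idea in miniature: a translation-only pose (the simplest parameterization) is refined by minimizing the mean squared point-to-line distance between mapped model edgelets and run-time edgelets. The synthetic data, names, and the plain gradient step are all illustrative assumptions; this is not the disclosure's solver.

```python
# Toy refinement: recover a 2-D translation by minimizing the mean squared
# point-to-line distance. Data and step size are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Model edgelets: points along a vertical edge whose normal is (1, 0).
model_pts = np.stack([np.zeros(20), np.linspace(0.0, 19.0, 20)], axis=1)
normal = np.array([1.0, 0.0])

# Run-time edgelets: the same edge shifted by the "true" translation (3, 0).
runtime_pts = model_pts + np.array([3.0, 0.0]) \
    + 0.05 * rng.standard_normal(model_pts.shape)
b = runtime_pts @ normal                 # line offsets: n . x = b per run-time edgelet

t = np.zeros(2)                          # coarse pose: translation, deliberately off
for _ in range(50):
    mapped = model_pts + t               # map model edgelets with the current pose
    d = mapped @ normal - b              # point-to-line distances (pairs by index)
    t -= 0.5 * (2.0 * d.mean() * normal) # gradient step on the mean squared distance
print(np.round(t, 2))                    # close to the true translation [3. 0.]
```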
Embodiments provide pose refinement of the coarse search results of linear, as well as non-linear, search methods. Embodiments of the disclosure are especially useful as a fine search stage for use with coarse search mechanisms so as to more effectively and accurately search for patterns that require a non-linear transformation to map from a template to a run-time image. Thus, the disclosed embodiments improve the repeatability and accuracy of coarse linear and non-linear search methods. The disclosed embodiments improve the modeling accuracy of a transformation, and can be used with any non-linear transformation, including perspective (three-dimensional poses), thin-plate spline (deformation), cylinder, cone, or any other parameterizable transformation, as well as any affine transformation, including translation, scale, and rotation. Thus, the embodiments can be used with a number of different transformations, employing a stable numerical solver that practically guarantees convergence upon a solution. Also, a method of the disclosure can refine coarse poses that are misaligned by many pixels. In addition, the disclosure can improve on rotation-invariant and scale-invariant search methods by compensating for non-linear distortions of a pattern in an image.
An embodiment can also perform tracking of a pattern in an image that is undergoing transformations over a sequence of images. In this capacity, the invention can track many important transformations of patterns in a sequence of run-time images, such as tracking moving three-dimensional objects using a perspective transformation, a form of non-linear transformation. Consequently, this important transformation can be tracked over a sequence of run-time image frames using the invention, by using the refined pose of a previous frame as the estimated pose for the next frame. The tracking aspect of the invention allows for fast tracking of patterns through a sequence of images.
Accordingly, a first general aspect of the invention is a method for refining a pose estimate of a model coarsely aligned with a run-time image, the pose estimate being characterized by at least one parameter, the model having a plurality of model edgelets, the run-time image having a plurality of run-time edgelets, and each edgelet having a position and an orientation. The method includes mapping the position and orientation of each model edgelet onto the run-time image using the pose estimate to provide a plurality of mapped edgelets. Then, for each mapped edgelet, all run-time features are found having a position within a distance capture range of the mapped edgelet, and having an orientation within an angular capture range of the mapped edgelet, so as to provide a correspondence list of run-time features of the mapped edgelet, thereby providing a plurality of correspondence lists. Next, for each mapped edgelet, a closest run-time edgelet is found within the correspondence list of the mapped edgelet. Then, for each mapped edgelet, a distance is found between the mapped edgelet and the closest run-time edgelet within the correspondence list of the mapped edgelet. Next, the at least one parameter of the pose estimate is modified so as to minimize an average over the plurality of mapped edgelets of the distance between each mapped edgelet and the closest run-time edgelet within the correspondence list of the mapped edgelet.
In a preferred embodiment, the distance is the point-to-line distance between the mapped edgelet and the closest run-time edgelet within the correspondence list of the mapped edgelet.
In another preferred embodiment, modifying the at least one parameter of the pose estimate so as to minimize the average distance over the plurality of mapped edgelets proceeds as an iterative loop. In a further preferred embodiment, the iterative loop is terminated when a number of iterations of the iterative loop reaches a maximum number of iterations. In an alternate embodiment, the iterative loop is terminated when the average over the plurality of mapped edgelets of the distance is less than an average distance threshold. In yet another alternate embodiment, the iterative loop is terminated when a change in the at least one parameter per iteration is less than a change threshold.
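A compact sketch of these three termination tests might look as follows; the names and threshold values are placeholders, since the disclosure leaves them application-dependent.

```python
# Hedged sketch of the loop-termination tests; thresholds are placeholders.
def should_terminate(iteration, avg_distance, param_change,
                     max_iters=50, avg_dist_tol=0.01, change_tol=1e-6):
    return (iteration >= max_iters            # maximum number of iterations
            or avg_distance < avg_dist_tol    # average distance small enough
            or param_change < change_tol)     # parameters no longer changing

print(should_terminate(iteration=3, avg_distance=0.005, param_change=0.1))  # True
```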
In another embodiment, the method also includes performing data-reduction on the model prior to mapping the position and orientation of each model edgelet. In a preferred embodiment, performing data-reduction includes chaining model edgelets, and discarding edgelets not included in a chain of edgelets. In a further preferred embodiment, performing data-reduction includes discarding every nth edgelet, where n is an integer selected so as to reduce computation overhead while preserving sufficient accuracy for an application to which the method is applied.
In another preferred embodiment, the plurality of model edgelets are obtained by first sub-sampling a model image to provide a sub-sampled model image, and then edge detecting the sub-sampled model image to provide a model having a plurality of edgelets.
In yet another preferred embodiment, modifying the at least one parameter of the pose estimate includes computing a search direction in pose parameter space; and incrementing the pose parameter in the search direction in pose parameter space. In a further preferred embodiment, the search direction is in the direction of one of: gradient and robust gradient.
In another preferred embodiment, the distance capture range of the mapped edgelet is sized so as to capture some run-time image edgelets in portions of a run-time image having an edge, and so as not to capture any run-time image edgelets in portions of the run-time image not having an edge.
In other preferred embodiments, the average is an arithmetic average, or a root mean squared average. In yet other preferred embodiments, the pose estimate is a non-linear pose estimate. In a further preferred embodiment, the non-linear pose estimate is a non-linear transformation selected from the group including: perspective, cylinder, cone, polynomial, and thin-plate spline.
In another preferred embodiment, modifying the at least one parameter of the pose estimate so as to minimize an average over the plurality of mapped edgelets of the distance proceeds as an iterative loop, the iterative loop using only run-time image edgelets that are located within a consider range of each mapped edgelet that was mapped by the pose estimate prior to the iterative loop. In a further preferred embodiment, each iteration of the iterative loop uses only run-time edgelets that are within a capture range of each mapped edgelet that was mapped by a current estimate of the pose, and the capture range is sized smaller than the consider range so as to effectively reduce the influence of outliers and spurious run-time image edgelets.
Another general aspect of the invention is a method for refining a pose estimate of a model coarsely aligned with a run-time image, the pose estimate being characterized by at least one parameter. The method includes providing a pose estimate to be refined, the pose estimate being characterized by at least one parameter; extracting edgelets from a model image so as to provide a model having a plurality of model edgelets, each model edgelet having a position and an orientation; extracting edgelets from the run-time image so as to provide a plurality of run-time edgelets, each run-time edgelet having a position and an orientation; mapping the position and orientation of each model edgelet onto the run-time image using the pose estimate to provide a plurality of mapped edgelets; pairing each mapped edgelet with a run-time edgelet to provide a plurality of edgelet pairs; for each edgelet pair, finding a distance between the mapped edgelet and the run-time edgelet paired with the mapped edgelet; and modifying at least one parameter of the pose estimate so as to minimize an average over the plurality of edgelet pairs of the distance between the mapped edgelet and the run-time edgelet paired with the mapped edgelet.
In a preferred embodiment, the run-time edgelet of an edgelet pair is selected from a plurality of run-time edgelets. In another preferred embodiment, pairing each mapped edgelet with a run-time edgelet includes finding all run-time features having a position within a distance capture range of the mapped edgelet, and having an orientation within an angular capture range of the mapped edgelet, so as to provide a correspondence list of run-time features of the mapped edgelet, thereby providing a plurality of correspondence lists; and finding a closest run-time edgelet within the correspondence list of the mapped edgelet.
In another preferred embodiment, extracting features from the model image includes sub-sampling the model image to provide a sub-sampled model image, and detecting edges in the sub-sampled model image to provide a model having a plurality of edgelets.
Another general aspect of the invention is a method for refining a non-linear pose estimate of a model coarsely aligned with a run-time image, the non-linear pose estimate being characterized by at least one parameter, the model having a plurality of model edgelets, the run-time image having a plurality of run-time edgelets, each edgelet having a position and an orientation. This method includes modifying the at least one parameter of the pose estimate so as to minimize an average distance taken over a plurality of model edgelets mapped by the pose estimate, the distance being the distance between each model edgelet mapped by the pose estimate, and a corresponding run-time edgelet.
In a preferred embodiment, the corresponding run-time edgelet is the run-time edgelet that is closest to the model edgelet mapped by the pose estimate. In an alternate preferred embodiment, the corresponding run-time edgelet is the run-time edgelet that is closest to the model edgelet mapped by the pose estimate, and also falls within a capture range. In a further preferred embodiment, the capture range includes both a distance capture range, and an angle capture range. In another preferred embodiment, each corresponding edgelet is included in a correspondence list. In a further preferred embodiment, the correspondence list is a list of lists. In a preferred embodiment, modifying the at least one parameter of the pose estimate is performed iteratively.
Another general aspect of the invention is a method for tracking the motion of a pattern in an image undergoing a non-linear deformation over a sequence of images. This method includes providing a current pose of a model aligned with a first image of the sequence, the current pose being a non-linear transformation characterized by at least one parameter; providing a second image of the sequence of images, the second image having a plurality of second image edgelets, each second image edgelet having a position and an orientation; mapping the position and orientation of each model edgelet onto the second image using the current pose of the model in the first image to provide a plurality of mapped edgelets; for each mapped edgelet, finding all second image features having a position within a distance capture range of the mapped edgelet, and having an orientation within an angular capture range of the mapped edgelet, so as to provide a correspondence list of second image features of the mapped edgelet, thereby providing a plurality of correspondence lists; for each mapped edgelet, finding a closest second image edgelet within the correspondence list of the mapped edgelet; for each mapped edgelet, finding a distance between the mapped edgelet and the closest second image edgelet within the correspondence list of the mapped edgelet; and modifying the at least one parameter of the current pose so as to minimize an average over the plurality of mapped edgelets of the distance between each mapped edgelet and the closest second image edgelet within the correspondence list of the mapped edgelet, thereby providing an updated pose.
In a preferred embodiment, the distance is the point-to-line distance between the mapped edgelet and the closest second image edgelet within the correspondence list of the mapped edgelet. In another preferred embodiment, modifying the at least one parameter proceeds as an iterative loop. In a further preferred embodiment, the iterative loop is terminated when a number of iterations of the iterative loop reaches a maximum number of iterations. In another further preferred embodiment, the iterative loop is terminated when the average over the plurality of mapped edgelets of the distance is less than an average distance threshold. In yet another further preferred embodiment, the iterative loop is terminated when a change in the at least one parameter per iteration is less than a change threshold.
In another embodiment, the method further includes performing data-reduction on the model prior to mapping the position and orientation of each model edgelet. In a further preferred embodiment, performing data-reduction includes chaining model edgelets, and discarding edgelets not included in a chain of edgelets. In another further preferred embodiment, performing data-reduction includes discarding every nth edgelet, where n is an integer selected so as to reduce computation overhead while preserving sufficient accuracy for an application to which the method is applied.
In a preferred embodiment, the plurality of model edgelets are obtained by first sub-sampling a model image to provide a sub-sampled model image, and then edge detecting the sub-sampled model image to provide a model having a plurality of edgelets.
In another preferred embodiment, modifying the at least one parameter of the pose estimate includes computing a search direction in pose parameter space, and incrementing the pose parameter in the search direction in pose parameter space. In a further preferred embodiment, the search direction is in the direction of one of: gradient and robust gradient.
In another preferred embodiment, the distance capture range of the mapped edgelet is sized so as to capture some run-time image edgelets in portions of a run-time image having an edge, and so as not to capture any run-time image edgelets in portions of the run-time image not having an edge.
In still other preferred embodiments, the average is an arithmetic average, or a root mean squared average.
In yet another preferred embodiment, the pose estimate is a non-linear pose estimate. In a further preferred embodiment, the non-linear pose estimate is a non-linear transformation selected from the group including: perspective, cylinder, cone, polynomial, and thin-plate spline.
With reference to the figures, further data reduction can be performed upon the list of edgelets, such as simply discarding every other edgelet, or some other systematic scheme for retaining and/or discarding edgelets. Also, more intelligent methods for discarding edgelets can be employed, such as first chaining the edgelets, and then discarding chains that are shorter than a minimum length. Such data reduction can improve the speed of the method without impairing its accuracy. There are many other methods for discarding edgelets, known in the art of machine vision, that will improve speed without appreciably degrading accuracy; the method used will depend somewhat on the particular application.
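One plausible reading of this data-reduction step is sketched below; the greedy proximity chaining, the thresholds, and the every-nth thinning are illustrative assumptions, since the disclosure does not prescribe a particular chaining scheme.

```python
# Illustrative data reduction: chain edgelets by proximity, drop short chains,
# then keep every nth survivor. All thresholds here are made up for the example.
import numpy as np

def reduce_edgelets(points, chain_gap=1.5, min_chain_len=5, keep_every_n=2):
    pts = np.asarray(points, dtype=float)
    pts = pts[np.argsort(pts[:, 1])]           # crude ordering along the contour
    chains, current = [], [pts[0]]
    for p in pts[1:]:
        if np.linalg.norm(p - current[-1]) <= chain_gap:
            current.append(p)                  # extend the current chain
        else:
            chains.append(current); current = [p]
    chains.append(current)
    kept = [p for c in chains if len(c) >= min_chain_len for p in c]
    return np.array(kept[::keep_every_n])      # systematic every-nth thinning

edge = [(0, i) for i in range(10)] + [(50, 0), (60, 0)]  # one chain plus two strays
print(len(reduce_edgelets(edge)))              # strays discarded, chain thinned -> 5
```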
The result of the sub-sampling of a model image, edge detection, and data reduction is a list of edgelets that, taken together, can be used as a model.
Alternatively, a model can be a list of edgelets obtained in some other way, such as by a model synthesizer that can create models of shapes specified by one or more parameters, and derive a list of model edgelets corresponding to the shape, for example. Or, a list of model edgelets can simply be provided.
Details of feature extraction 104 are shown in the figures.
Similarly, at run-time 106, a run-time image 108 is acquired, such as by using a machine vision camera. As was done for the acquired training image 102, features are extracted 110. Preferably, the features are edgelets, and the result of the feature extraction 110 is a list of run-time edgelets. As in the training phase, sub-sampling prior to edge detection, and data reduction by systematically reducing the number of run-time edgelets can improve speed and/or accuracy of the method.
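A rough sketch of such a feature-extraction stage is given below: sub-sample, compute image gradients, and keep strong-gradient pixels as edgelets carrying a position and an orientation. The simple np.gradient operator and the parameter values are assumptions standing in for whatever edge detector an implementation actually uses.

```python
# Rough edgelet extraction: sub-sample, take gradients, and threshold the
# gradient magnitude; parameters and the gradient operator are illustrative.
import numpy as np

def extract_edgelets(image, mag_thresh=0.4, subsample=2):
    img = np.asarray(image, dtype=float)[::subsample, ::subsample]
    gy, gx = np.gradient(img)                    # rows then columns
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > mag_thresh)        # strong-gradient pixels
    angles = np.arctan2(gy[ys, xs], gx[ys, xs])  # edgelet orientations
    positions = np.stack([xs, ys], axis=1) * subsample  # full-resolution coords
    return positions, angles

img = np.zeros((16, 16)); img[:, 8:] = 1.0       # a vertical step edge
pos, ang = extract_edgelets(img)
print(len(pos), np.round(ang[0], 2))             # 16 edgelets, orientation ~0.0 rad
```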
Referring to the figures, in the next phase, called the corresponder 114, the list of model edgelets, the list of run-time edgelets, and a parameterized coarse pose 112 are used to create a correspondence list, which is the output of the corresponder 114. The correspondence list is actually a list of lists, as will now be explained.
A first step in the corresponder 114 is the step 116, wherein each model (training) edgelet is mapped using the parameterized coarse pose 112. This step 116 is illustrated in the figures.
A second step in the corresponder 114 is the step 118. In this step 118, for each mapped edgelet 505 in the plurality of mapped edgelets 504, all run-time features 406 are found that have a position within a position capture range of the mapped edgelet 505, and an orientation within an angle capture range of the mapped edgelet 505. This is best understood by reference to the figures.
A “correspondence list” is actually a list of lists, one list for each mapped edgelet. The contents of each list associated with a mapped edgelet is a plurality of run-time edgelets. Each run-time edgelet on the list falls within both the position capture range 608, and the angle capture range (not shown). Any run-time edgelet that falls outside either the position capture range 608 of the mapped edgelet 604, or the angle capture range (not shown) of the mapped edgelet 604, does not get included in the list associated with the mapped edgelet 604. Thus, the entire correspondence list is a collection of all the lists associated with all the mapped edgelets. This list of lists is the output of step 118 that is provided to the pose estimator 120.
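A sketch of this corresponder stage, under assumed names and capture-range values, appears below; it returns the list of lists described above, with each inner list holding the indices of the captured run-time edgelets.

```python
# Sketch of the corresponder: one list of captured run-time edgelets per mapped
# edgelet, filtered by both capture ranges. Names and ranges are illustrative.
import numpy as np

def angle_diff(a, b):
    """Smallest absolute difference between two angles, in radians."""
    return np.abs((a - b + np.pi) % (2.0 * np.pi) - np.pi)

def build_correspondence(mapped_pos, mapped_ang, rt_pos, rt_ang,
                         pos_range=5.0, ang_range=np.deg2rad(15.0)):
    lists = []
    for p, a in zip(mapped_pos, mapped_ang):
        dists = np.linalg.norm(rt_pos - p, axis=1)
        ok = (dists <= pos_range) & (angle_diff(rt_ang, a) <= ang_range)
        lists.append(np.nonzero(ok)[0])          # indices of captured edgelets
    return lists                                 # the "list of lists"

mapped_pos = np.array([[0.0, 0.0]]); mapped_ang = np.array([0.0])
rt_pos = np.array([[1.0, 1.0], [10.0, 0.0]]); rt_ang = np.array([0.1, 0.0])
print(build_correspondence(mapped_pos, mapped_ang, rt_pos, rt_ang))  # [array([0])]
```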
In the pose estimator 120, a current estimate 122 of the pose is iteratively refined using the correspondence list.
In step 124, a closest run-time edgelet is selected for each mapped edgelet from within that mapped edgelet's list of the correspondence list.
The close-up view 802 again shows a single mapped edgelet 806 and the run-time edgelets within the capture range 804 that are included on a list of the correspondence list. The run-time edgelet 808 is selected as being the closest to the mapped edgelet 806.
The list-of-lists nature of the correspondence list is illustrated in the close-up view 810, where a plurality of capture ranges 812, 814, 816, 818, and 820 are shown, each capture range resulting in a list of run-time edgelets to be included in the correspondence list.
In the numerical parameter solver 126, the parameters of the current estimate 122 of the pose are modified so as to minimize an average, taken over all the pairs, of the distance between each mapped edgelet's position and the line passing through its closest run-time edgelet that was selected in step 124.
The point-to-line distance d_j can be described by Equation 1:

d_j = Dist(Pose(Φ)·p_j, line_j) = n̂_j · p′_j − b_j

where n̂_j is the unit normal of the jth line, p′_j = Pose(Φ)·p_j is the mapped model point, and b_j is the distance of the jth line from the origin. Collecting these distances in a vector d results in Equation 2:

d = [d_j]
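Equations 1 and 2 translate directly into code; the sketch below uses illustrative names, with each matched run-time edgelet supplying a unit normal n_j and an offset b_j.

```python
# Direct transcription of Equations 1 and 2: signed point-to-line distances,
# stacked into the vector d. Names and the sample values are illustrative.
import numpy as np

def distance_vector(mapped_pts, normals, offsets):
    """d_j = n_j . p'_j - b_j for every matched pair j."""
    return np.einsum('ij,ij->i', normals, mapped_pts) - offsets

mapped = np.array([[3.2, 0.0], [3.2, 5.0]])      # p'_j = Pose(phi) applied to p_j
normals = np.array([[1.0, 0.0], [1.0, 0.0]])     # unit normals of the matched lines
offsets = np.array([3.0, 3.0])                   # b_j, line distances from origin
print(distance_vector(mapped, normals, offsets)) # -> [0.2 0.2]
```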
Next, in step 128, if the change in the parameters of the pose falls below a minimum threshold (here called TOL), or if the number of iterations (here called NITER) reaches a maximum threshold, or if the average distance (or other function of the point-to-line distances) falls below a threshold, then the current estimate of the pose is returned and the method terminates 130. The change in the parameters of the pose can be calculated as the change in the sum of the parameters of the pose, such as the root-mean-square sum, or the simple sum, or the average, for example.
Referring to the figures, an error metric is, in general, some function of the distance vector d:

error = Function(d)
The search direction dX is found at step 206 by taking a partial derivative of the error with respect to the ith parameter of the pose:

dX_i = ∂error/∂X_i    (Equation A)
In the case of the sum of squared distances error metric, the error is defined to be:

error = Σ_j d_j²
Using Equation A, the following result is obtained:

dX_i = Σ_j 2 d_j (∂d_j/∂X_i), i.e., dX is proportional to Jᵀd, where J_ji = ∂d_j/∂X_i is the Jacobian of the distance vector with respect to the pose parameters.
In the case of the thresholded distances metric, a cap d_max is placed on the maximum distance that a point can be from a line, and the error is defined to be:

error = Σ_j min(d_j², d_max²)
Using Equation A again, the following result is obtained:

dX_i = Σ_j 2 d_j (∂d_j/∂X_i), where the sum runs only over those pairs j with d_j² < d_max², since capped pairs contribute nothing to the derivative.
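One plausible way to realize these derivatives numerically is a finite-difference Jacobian of the distance vector, giving a search direction proportional to Jᵀd; the routine below is a hedged sketch with assumed names, not the disclosure's solver.

```python
# Hedged sketch: search direction from a finite-difference Jacobian of the
# distance vector d with respect to the pose parameters; dX is proportional
# to J^T d for the sum-of-squared-distances error. distance_fn: params -> d.
import numpy as np

def search_direction(distance_fn, params, eps=1e-6):
    d = distance_fn(params)
    J = np.empty((d.size, params.size))
    for i in range(params.size):                 # one column per pose parameter
        dp = params.copy(); dp[i] += eps
        J[:, i] = (distance_fn(dp) - d) / eps    # finite-difference derivative
    return J.T @ d                               # for the thresholded metric,
                                                 # clip d to the cap first

f = lambda p: np.array([p[0] - 3.0])             # toy: one distance, one parameter
print(search_direction(f, np.array([0.0])))      # -> [-3.]
```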
After the search direction has been computed at 206, the parameter space in the computed search direction is searched at 208 until the number of iterations exceeds a threshold, or until the error metric falls below a threshold 220, and the method thereby converges 222, whereupon the refined parameterized pose is returned 224, or until the method determines that a local minimum in the parameter space has been reached 230, at which point the parameterized pose can also be returned. Thresholds are application-dependent, and are therefore determined empirically to achieve desired results.
To search the parameter space in the direction of the computed search, a value for lambda is selected at step 210, and that value for lambda is multiplied by dX, i.e., Jᵀd, and then added to the current pose estimate X to get the new current pose estimate X′. Lambda can start at 2⁻⁴, for example.
Next, each model edgelet point is mapped 212 using the new current estimate X′ of the pose. Then, the average point-to-line error metric is computed 214, such as by computing the average distance. In some applications, it is useful to exclude outliers when computing distance.
If the error metric has decreased due to the change in lambda at 210, then lambda is multiplied by 2 at step 218. Else, lambda is divided by 2 at step 226. If lambda falls below a threshold, or if some number (e.g., 10) of iterations has been reached, as determined at 228, then a local minimum has been reached.
If the error metric has been decreased such that the number of iterations exceeds a threshold, or such that the error metric falls below a threshold 220, the method is deemed to have converged 222, whereupon the refined parameterized pose is returned 224.
Step 206 and the steps of 208 represent a method of searching pose parameter space so as to minimize an aggregate distance metric over all pairs of mapped model points and run-time edgelets. Other approaches to computing an aggregate distance would achieve the same result, as would other methods of minimizing the aggregate distance metric so as to provide a refined parameterized pose.
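An illustrative rendering of this lambda search follows; the step is taken along the negative gradient here so that accepted steps reduce the error, and all thresholds are placeholders since, as noted above, they are application-dependent.

```python
# Illustrative lambda search: double lambda after an improvement, halve it
# otherwise; stop on convergence or when lambda collapses (a local minimum).
def lambda_search(error_fn, grad_fn, x, lam=2.0**-4,
                  max_iters=100, err_tol=1e-8, lam_tol=1e-12):
    err = error_fn(x)
    for _ in range(max_iters):
        x_new = x - lam * grad_fn(x)       # candidate pose estimate X'
        err_new = error_fn(x_new)
        if err_new < err:                  # improvement: accept, grow the step
            x, err, lam = x_new, err_new, lam * 2.0
        else:                              # no improvement: shrink the step
            lam /= 2.0
            if lam < lam_tol:
                break                      # local minimum reached
        if err < err_tol:
            break                          # converged
    return x

# Toy quadratic: minimize (x - 3)^2 from a poor starting estimate.
print(round(lambda_search(lambda x: (x - 3.0)**2, lambda x: 2.0*(x - 3.0), 0.0), 3))
```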
To further improve performance and robustness, not all run-time image edgelets are used in the refinement loop of 208. Instead, the only run-time image edgelets that are used are those located within a “consider range” of each mapped edgelet that was mapped by the pose estimate prior to refinement by the iterative loop. In addition, each iteration of the iterative loop uses only run-time edgelets that are within a “capture range” of each mapped edgelet that was mapped by a current estimate of the pose.
To effectively reduce the influence of outliers and spurious run-time image edgelets, the capture range is sized smaller than the consider range. Using both a consider range and a smaller capture range allows the pose to be attracted to far-away features, while not being affected by outliers and spurious edgelets. For example, the consider range could be 10-20 pixels, while the capture range could be 5 pixels. This is another example of how data reduction can improve the performance and robustness of the disclosed method, but other methods of data reduction, whether in the alternative to, or in addition to, those discussed herein, can also improve the performance and robustness of the method.
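A small sketch of this two-tier filtering, with the example range values above and assumed names: the consider range is applied once against the initially mapped positions, while the capture range is re-applied on every iteration against the currently mapped positions.

```python
# Two-tier filtering sketch: a wide one-time "consider range" and a narrower
# per-iteration "capture range". Values follow the example ranges in the text.
import numpy as np

def within(pts, centers, radius):
    """Mask of points lying within `radius` of any center."""
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1) <= radius

rt = np.array([[0.0, 0.0], [8.0, 0.0], [40.0, 0.0]])      # run-time edgelets
initial_mapped = np.array([[1.0, 0.0]])
considered = rt[within(rt, initial_mapped, radius=15.0)]  # applied once
current_mapped = np.array([[2.0, 0.0]])
captured = considered[within(considered, current_mapped, radius=5.0)]  # per iteration
print(len(considered), len(captured))                     # -> 2 1
```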
Embodiments have been discussed in the context of pose refinement, and particularly in the context of non-linear pose refinement. This can open up significant new applications, such as tracking. One reason for this is that the perspective transformation used when tracking three-dimensional objects is a non-linear transformation. So, the disclosed method for refining a non-linear pose can be easily adapted for tracking the motion of objects in three-dimensional space. In this case, the motion of an object is represented by a sequence of image frames, much like a sequence of frames of movie film, where an image is captured once every thirtieth of a second, for example.
Application of a pose refinement method of the disclosure to tracking an object as it deforms or moves relative to a camera may be accomplished in an extremely efficient manner by using the refined pose of a previous frame as the initial pose estimate for the next frame.
Note that tracking refers to a situation where there is a sequence of images of the same object, either from different views, or from the same view as the object undergoes deformation. Thus, the different views can be due to motion in three dimensions of either the camera or the object, resulting in a changing perspective image. Also note that the method of the disclosure is advantageous when each successive frame is substantially similar to the previous frame, i.e., the deformation or perspective change that occurs between each pair of frames in an image sequence is not too large, even if the total deformation or perspective change over the entire sequence is large. Tracking according to the disclosure can be very efficient and fast, and is useful in robotics, security, and control applications.
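In code, the tracking loop reduces to seeding each frame's refinement with the previous frame's result. The sketch below assumes a refine_pose routine of the kind described above; its signature and the toy stand-in are illustrative.

```python
# Tracking sketch: the refined pose of frame k seeds the refinement for
# frame k+1. refine_pose is any pose-refinement routine (assumed signature).
import numpy as np

def track(frames, model_edgelets, initial_pose, refine_pose):
    poses, pose = [], np.asarray(initial_pose, dtype=float)
    for frame in frames:
        pose = refine_pose(pose, model_edgelets, frame)  # previous result as seed
        poses.append(pose.copy())
    return poses

# Toy stand-in: each "frame" is the translation it exhibits, and "refinement"
# pulls the estimate most of the way toward it.
fake_refine = lambda pose, model, frame: pose + 0.8 * (frame - pose)
frames = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([3.0, 0.0])]
print(np.round(track(frames, None, np.zeros(2), fake_refine)[-1], 2))  # approaches [3. 0.]
```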
Other modifications and implementations will occur to those skilled in the art without departing from the spirit and the scope of the invention as claimed. Accordingly, the above description is not intended to limit the invention except as indicated in the following claims.
Number | Name | Date | Kind |
---|---|---|---|
3069654 | Hough | Dec 1962 | A |
3560930 | Howard | Feb 1971 | A |
3898617 | Kashioka et al. | Aug 1975 | A |
3899240 | Gabor | Aug 1975 | A |
3899771 | Saraga et al. | Aug 1975 | A |
3936800 | Ejiri et al. | Feb 1976 | A |
3986007 | Ruoff | Oct 1976 | A |
4146924 | Birk et al. | Mar 1979 | A |
4200861 | Hubach et al. | Apr 1980 | A |
4213150 | Robinson et al. | Jul 1980 | A |
4295198 | Copeland et al. | Oct 1981 | A |
4441205 | Berkin et al. | Apr 1984 | A |
4441206 | Kuniyoshi et al. | Apr 1984 | A |
4441248 | Sherman et al. | Apr 1984 | A |
4567610 | McConnell | Jan 1986 | A |
4581762 | Lapidus et al. | Apr 1986 | A |
4618989 | Tsukune et al. | Oct 1986 | A |
4637055 | Taylor | Jan 1987 | A |
4651341 | Nakashima et al. | Mar 1987 | A |
4672676 | Linger | Jun 1987 | A |
4685143 | Choate | Aug 1987 | A |
4707647 | Coldrenet et al. | Nov 1987 | A |
4736437 | Sacks et al. | Apr 1988 | A |
4763280 | Robinson et al. | Aug 1988 | A |
4783829 | Miyakawa et al. | Nov 1988 | A |
4799175 | Sano et al. | Jan 1989 | A |
4809348 | Meyer et al. | Feb 1989 | A |
4823394 | Berkin et al. | Apr 1989 | A |
4843631 | Steinpichler et al. | Jun 1989 | A |
4845765 | Juvin et al. | Jul 1989 | A |
4849914 | Medioni et al. | Jul 1989 | A |
4893346 | Bishop | Jan 1990 | A |
4903313 | Tachikawa | Feb 1990 | A |
4955062 | Terui | Sep 1990 | A |
4972359 | Silver et al. | Nov 1990 | A |
4979223 | Manns et al. | Dec 1990 | A |
4980971 | Bartschat et al. | Jan 1991 | A |
5003166 | Girod | Mar 1991 | A |
5020006 | Sporon-Fiedler | May 1991 | A |
5027417 | Kitakado et al. | Jun 1991 | A |
5033099 | Yamada et al. | Jul 1991 | A |
5040231 | Terzian | Aug 1991 | A |
5046109 | Fujimori et al. | Sep 1991 | A |
5048094 | Aoyama et al. | Sep 1991 | A |
5072384 | Doi et al. | Dec 1991 | A |
5111516 | Nakano et al. | May 1992 | A |
5161201 | Kaga et al. | Nov 1992 | A |
5168530 | Peregrim et al. | Dec 1992 | A |
5177559 | Bachelder et al. | Jan 1993 | A |
5206917 | Ueno et al. | Apr 1993 | A |
5245674 | Cass et al. | Sep 1993 | A |
5253306 | Nishio | Oct 1993 | A |
5268999 | Yokoyama | Dec 1993 | A |
5272657 | Basehore et al. | Dec 1993 | A |
5280351 | Wilkinson | Jan 1994 | A |
5313532 | Harvey et al. | May 1994 | A |
5343028 | Figarella et al. | Aug 1994 | A |
5343390 | Doi et al. | Aug 1994 | A |
5347595 | Bokser | Sep 1994 | A |
5351310 | Califano | Sep 1994 | A |
5384711 | Kanai et al. | Jan 1995 | A |
5406642 | Maruya | Apr 1995 | A |
5459636 | Gee et al. | Oct 1995 | A |
5471541 | Burtnyk et al. | Nov 1995 | A |
5481712 | Silver et al. | Jan 1996 | A |
5495537 | Bedrosian et al. | Feb 1996 | A |
5497451 | Holmes | Mar 1996 | A |
5513275 | Khalaj et al. | Apr 1996 | A |
5515453 | Hennessey et al. | May 1996 | A |
5524064 | Oddou et al. | Jun 1996 | A |
5537669 | Evans et al. | Jul 1996 | A |
5539841 | Huttenlocher et al. | Jul 1996 | A |
5544254 | Hartley et al. | Aug 1996 | A |
5545887 | Smith et al. | Aug 1996 | A |
5548326 | Michael | Aug 1996 | A |
5550763 | Michael | Aug 1996 | A |
5550937 | Bell et al. | Aug 1996 | A |
5555317 | Anderson | Sep 1996 | A |
5555320 | Irie et al. | Sep 1996 | A |
5557684 | Wang et al. | Sep 1996 | A |
5559901 | Lobregt | Sep 1996 | A |
5568563 | Tanaka et al. | Oct 1996 | A |
5570430 | Sheehan et al. | Oct 1996 | A |
5586058 | Aloni et al. | Dec 1996 | A |
5602937 | Bedrosian et al. | Feb 1997 | A |
5602938 | Akiyama et al. | Feb 1997 | A |
5613013 | Schuette | Mar 1997 | A |
5621807 | Eibert et al. | Apr 1997 | A |
5623560 | Nakajima et al. | Apr 1997 | A |
5625707 | Diep et al. | Apr 1997 | A |
5625715 | Trew et al. | Apr 1997 | A |
5627912 | Matsumoto | May 1997 | A |
5631975 | Riglet et al. | May 1997 | A |
5633951 | Moshfeghi | May 1997 | A |
5638116 | Shimoura et al. | Jun 1997 | A |
5638489 | Tsuboka | Jun 1997 | A |
5640200 | Michael | Jun 1997 | A |
5650828 | Lee | Jul 1997 | A |
5657403 | Wolff et al. | Aug 1997 | A |
5663809 | Miyaza et al. | Sep 1997 | A |
5673334 | Nichani et al. | Sep 1997 | A |
5676302 | Petry | Oct 1997 | A |
5686973 | Lee | Nov 1997 | A |
5694482 | Maali et al. | Dec 1997 | A |
5694487 | Lee | Dec 1997 | A |
5703960 | Soest | Dec 1997 | A |
5703964 | Menon et al. | Dec 1997 | A |
5708731 | Shimotori et al. | Jan 1998 | A |
5717785 | Silver | Feb 1998 | A |
5754226 | Yamada et al. | May 1998 | A |
5757956 | Koljonen et al. | May 1998 | A |
5768421 | Gaffin et al. | Jun 1998 | A |
5793901 | Matsutake et al. | Aug 1998 | A |
5796868 | Dutta-Choudhury et al. | Aug 1998 | A |
5815198 | Vachtsevanos et al. | Sep 1998 | A |
5822742 | Alkon et al. | Oct 1998 | A |
5825913 | Rostami et al. | Oct 1998 | A |
5825922 | Pearson et al. | Oct 1998 | A |
5828769 | Burns | Oct 1998 | A |
5828770 | Leis et al. | Oct 1998 | A |
5845007 | Ohashi et al. | Dec 1998 | A |
5845288 | Syeda-Mahmood | Dec 1998 | A |
5848184 | Taylor et al. | Dec 1998 | A |
5850466 | Schott et al. | Dec 1998 | A |
5850469 | Martin et al. | Dec 1998 | A |
5862245 | Renouard et al. | Jan 1999 | A |
5864779 | Fujimoto | Jan 1999 | A |
5871018 | Delp et al. | Feb 1999 | A |
5875040 | Matraszek et al. | Feb 1999 | A |
5881170 | Araki et al. | Mar 1999 | A |
5890808 | Neff et al. | Apr 1999 | A |
5912984 | Michael et al. | Jun 1999 | A |
5912985 | Morimoto et al. | Jun 1999 | A |
5917733 | Bangham | Jun 1999 | A |
5926568 | Chaney et al. | Jul 1999 | A |
5930391 | Kinjo | Jul 1999 | A |
5933516 | Tu et al. | Aug 1999 | A |
5933523 | Drisko et al. | Aug 1999 | A |
5937084 | Crabtree et al. | Aug 1999 | A |
5940535 | Huang | Aug 1999 | A |
5943442 | Tanaka | Aug 1999 | A |
5950158 | Wang | Sep 1999 | A |
5970182 | Goris | Oct 1999 | A |
5974169 | Bachelder | Oct 1999 | A |
5982475 | Bruning et al. | Nov 1999 | A |
5987172 | Michael | Nov 1999 | A |
5995648 | Drisko et al. | Nov 1999 | A |
5995953 | Rindtorff et al. | Nov 1999 | A |
6002793 | Silver et al. | Dec 1999 | A |
6005978 | Garakani | Dec 1999 | A |
6021220 | Anderholm | Feb 2000 | A |
6023530 | Wilson | Feb 2000 | A |
6026186 | Fan | Feb 2000 | A |
6026359 | Yamaguchi et al. | Feb 2000 | A |
6035006 | Michael | Mar 2000 | A |
6035066 | Michael | Mar 2000 | A |
6052489 | Sakaue | Apr 2000 | A |
6061086 | Reimer et al. | May 2000 | A |
6064388 | Reyzin | May 2000 | A |
6064958 | Takahashi et al. | May 2000 | A |
6067379 | Silver | May 2000 | A |
6070160 | Geary et al. | May 2000 | A |
6078700 | Sarachik | Jun 2000 | A |
6081620 | Anderholm | Jun 2000 | A |
6111984 | Fukasawa | Aug 2000 | A |
6115052 | Sakaue | Sep 2000 | A |
6118893 | Li | Sep 2000 | A |
6122399 | Moed | Sep 2000 | A |
6128405 | Fuji | Oct 2000 | A |
6137893 | Michael et al. | Oct 2000 | A |
6151406 | Chang et al. | Nov 2000 | A |
6154566 | Mine et al. | Nov 2000 | A |
6154567 | McGarry | Nov 2000 | A |
6173066 | Peurach et al. | Jan 2001 | B1 |
6173070 | Michael et al. | Jan 2001 | B1 |
6178261 | Williams et al. | Jan 2001 | B1 |
6178262 | Picard et al. | Jan 2001 | B1 |
6215915 | Reyzin | Apr 2001 | B1 |
6226418 | Miller et al. | May 2001 | B1 |
6246478 | Chapman et al. | Jun 2001 | B1 |
6272244 | Takahashi et al. | Aug 2001 | B1 |
6272245 | Lin | Aug 2001 | B1 |
6311173 | Levin | Oct 2001 | B1 |
6324298 | O'Dell et al. | Nov 2001 | B1 |
6324299 | Sarachik et al. | Nov 2001 | B1 |
6336082 | Nguyen et al. | Jan 2002 | B1 |
6345106 | Borer | Feb 2002 | B1 |
6363173 | Stentz et al. | Mar 2002 | B1 |
6381366 | Taycher et al. | Apr 2002 | B1 |
6381375 | Reyzin | Apr 2002 | B1 |
6385340 | Wilson | May 2002 | B1 |
6408109 | Silver et al. | Jun 2002 | B1 |
6421458 | Michael et al. | Jul 2002 | B2 |
6424734 | Roberts et al. | Jul 2002 | B1 |
6453069 | Matsugu et al. | Sep 2002 | B1 |
6457032 | Silver | Sep 2002 | B1 |
6462751 | Felser et al. | Oct 2002 | B1 |
6466923 | Young et al. | Oct 2002 | B1 |
6529852 | Knoll et al. | Mar 2003 | B2 |
6532301 | Krumm et al. | Mar 2003 | B1 |
6574353 | Schoepflin et al. | Jun 2003 | B1 |
6594623 | Wang et al. | Jul 2003 | B1 |
6625303 | Young et al. | Sep 2003 | B1 |
6636634 | Melikian et al. | Oct 2003 | B2 |
6658145 | Silver et al. | Dec 2003 | B1 |
6681151 | Weinzimmer et al. | Jan 2004 | B1 |
6687402 | Taycher et al. | Feb 2004 | B1 |
6690842 | Silver et al. | Feb 2004 | B1 |
6691126 | Syeda-Mahmood | Feb 2004 | B1 |
6691145 | Shibata et al. | Feb 2004 | B1 |
6714679 | Scola et al. | Mar 2004 | B1 |
6728582 | Wallack | Apr 2004 | B1 |
6748112 | Nguyen et al. | Jun 2004 | B1 |
6751338 | Wallack | Jun 2004 | B1 |
6751361 | Wagman | Jun 2004 | B1 |
6760483 | Elichai et al. | Jul 2004 | B1 |
6771808 | Wallack | Aug 2004 | B1 |
6785419 | Jojic et al. | Aug 2004 | B1 |
6850646 | Silver | Feb 2005 | B1 |
6856698 | Silver et al. | Feb 2005 | B1 |
6859548 | Yoshioka et al. | Feb 2005 | B2 |
6909798 | Yukawa et al. | Jun 2005 | B1 |
6950548 | Bachelder et al. | Sep 2005 | B1 |
6959112 | Wagman | Oct 2005 | B1 |
6963338 | Bachelder et al. | Nov 2005 | B1 |
6973207 | Akopyan et al. | Dec 2005 | B1 |
6975764 | Silver et al. | Dec 2005 | B1 |
6985625 | Silver et al. | Jan 2006 | B1 |
6993177 | Bachelder | Jan 2006 | B1 |
6993192 | Silver et al. | Jan 2006 | B1 |
7006712 | Silver et al. | Feb 2006 | B1 |
7016539 | Silver et al. | Mar 2006 | B1 |
7043055 | Silver | May 2006 | B1 |
7043081 | Silver et al. | May 2006 | B1 |
7058225 | Silver et al. | Jun 2006 | B1 |
7065262 | Silver et al. | Jun 2006 | B1 |
7088862 | Silver et al. | Aug 2006 | B1 |
7139421 | Fix et al. | Nov 2006 | B1 |
7164796 | Silver et al. | Jan 2007 | B1 |
7190834 | Davis | Mar 2007 | B2 |
7239929 | Ulrich et al. | Jul 2007 | B2 |
7251366 | Silver et al. | Jul 2007 | B1 |
8081820 | Davis et al. | Dec 2011 | B2 |
20020054699 | Roesch et al. | May 2002 | A1 |
20020158636 | Tyan et al. | Oct 2002 | A1 |
20020181782 | Monden | Dec 2002 | A1 |
20030103647 | Rui et al. | Jun 2003 | A1 |
20040081346 | Louden et al. | Apr 2004 | A1 |
20040136567 | Billinghurst et al. | Jul 2004 | A1 |
20050105804 | Francos et al. | May 2005 | A1 |
20050117801 | Davis et al. | Jun 2005 | A1 |
20050259882 | Dewaele | Nov 2005 | A1 |
Number | Date | Country |
---|---|---|
44 06 020 | Jun 1995 | DE |
4406020 | Jun 1995 | DE |
6378009 | Apr 1988 | JP |
6160047 | Jun 1994 | JP |
3598651 | Dec 2004 | JP |
WO-9718524 | May 1997 | WO |
Entry |
---|
Kervrann and Heitz, “Robust Tracking of Stochastic Deformable Models in Long Image Sequences”, 1994, Proceedings of the IEEE International Conference on Image Processing vol. 3, pp. 88-92. |
Zhong et al., "Object Tracking Using Deformable Templates", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, No. 5, May 2000. |
Jain and Zhong, "Object Matching Using Deformable Templates", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, No. 3, Mar. 1996. |
Jain and Zhong, “Deformable Template models: A Review”, 1998 Elsevier Science, Signal Processing 71, pp. 109-129. |
Zitova and Flusser, “Image Registration Methods: A Survey”, Image and Vision Computing 21 (2003), pp. 977-1000, Elsevier 2003. |
Eric Marchand, Patrick Bouthemy, Francois Chaumette, A 2D-3D model-based approach to real-time visual tracking, Image and Vision Computing, vol. 19, Issue 13, Nov. 1, 2001, pp. 941-955, ISSN 0262-8856, DOI: 10.1016/S0262-8856(01)00054-3. |
Gdalyahu, Yoram et al., “Self-Organization in Vision: Stochastic Clustering for Image Segmentation, Perceptual Grouping, and Image Database Organization”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Inc., New York, US, vol. 23, No. 10, Oct. 2001, 1053-1074. |
Pauwels, E. J., et al., “Finding Salient Regions in Images”, Computer Vision and Image Understanding, Academic Press, San Diego, CA, US, vol. 75, No. 1-2 (Jul. 1999), 73-85. |
Scanlon, James et al., “Graph-Theoretic Algorithms for Image Segmentation”, Circuits and Systems, ISCAS '99 Proceedings of the 1999 IEEE International Symposium on Orlando, FL, IEEE, (May 30, 1999), 141-144. |
Shi, Jianbo et al., "Normalized Cuts and Image Segmentation", Computer Vision and Pattern Recognition, Proceedings, IEEE Computer Society Conference on San Juan, IEEE Comput. Soc., (Jun. 17, 1997), 731-737. |
Xie, Xuanli L., et al., "A New Fuzzy Clustering Validity Criterion and its Application to Color Image Segmentation", Proceedings of the International Symposium on Intelligent Control, New York, IEEE, (Aug. 13, 1991), 463-468. |
Mehrotra, Rajiv et al., "Feature-Based Retrieval of Similar Shapes", Proceedings of the International Conference on Data Engineering, Vienna, IEEE Comp. Soc. Press, vol. Conf. 9, (Apr. 19, 1993), 108-115. |
Belongie, S. et al., “Shape Matching and Object Recognition Using Shape Contexts”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Inc. New York, vol. 24, No. 4, (Apr. 2003), 509-522. |
Ohm, Jens-Rainer, "Digitale Bildcodierung", Springer Verlag, Berlin 217580, XP0002303066, Section 6.2 Bewegungsschätzung, (1995). |
Wei, Wen et al., "Recognition and Inspection of Two-Dimensional Industrial Parts Using Subpolygons", Pattern Recognition, Elsevier, Kidlington, GB, vol. 25, No. 12, (Dec. 1, 1992), 1427-1434. |
Bileschi, S. et al., “Advances in Component-based Face Detection”, Lecture notes in Computer Science, Springer Verlag, New York, NY, vol. 2388, (2002), 135-143. |
Fitzpatrick, J M., et al., “Handbook of Medical Imaging”, vol. 2: Medical image Processing and Analysis, SPIE Press, Bellingham, WA, (2000), 447-513. |
Bookstein, F L., “Principal Warps: Thin-Plate Splines and the Decomposition of Deformations”, IEEE Transactions on pattern Analysis and Machine Intelligence, IEEE Inc., New York, vol. 11, No. 6, (Jun. 1, 1989). |
Zhang, Zhengyou “Parameter estimation techniques: A tutorial with application to conic fitting”, Imag Vision Comput; Image and Vision computing; Elsevier Science Ltd, Oxford England, vol. 15, No. 1, (Jan. 1, 1997). |
Stockman, G et al., "Matching images to models for registration and object detection via clustering", IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Inc., New York, vol. PAMI-4, No. 3 (1982). |
Ballard, D. H., et al., “Generalizing the Hough Transform to Detect Arbitrary Shapes”, Pattern Recognition, vol. 13, No. 2 Pergamon Press Ltd. UK, (1981), pp. 111-122. |
Ballard, et al., “Searching Near and Approximate Location”, Section 4.2, Computer Vision, (1982), pp. 121-131. |
Brown, Lisa G., “A Survey of Image Registration Techniques”, ACM Computing Surveys, vol. 24, No. 4 Association for Computing Machinery, (1992), pp. 325-376. |
Caelli, et al., “Fast Edge-Only Matching Techniques for Robot Pattern Recognition”, Computer Vision, Graphics and Image Processing 39, Academic Press, Inc., (1987), pp. 131-143. |
Caelli, et al., “On the Minimum Number of Templates Required for Shift, Rotation and Size Invariant Pattern Recognition”, Pattern Recognition, vol. 21, No. 3, Pergamon Press plc, (1988), pp. 205-216. |
“Cognex 2000/3000/4000 Vision Tools”, Cognex Corporation, Chapter 2 Searching Revision 5.2 P/N 590-0103, (1992), pp. 1-68. |
“Cognex 3000/4000/5000 Programmable Vision Engines, Vision Tools”, Chapter 1 Searching, Revision 7.4 590-1036, (1996), pp. 1-68. |
Ballard, et al., “The Hough Method for Curve Detection”, Section 4.3, Computer Vision, (1982), pp. 121-131. |
"Cognex 3000/4000/5000 Programmable Vision Engines, Vision Tools", Chapter 14 Golden Template Comparison, (1996), pp. 569-595. |
“Apex Search Object Library Functions”, Cognex Corporation, (1998). |
“Apex Search Object”, acuWin version 1.5, (1997), pp. 1-35. |
“Apex Model Object”, Cognex Corporation, acuWin version 1.5, (1997), pp. 1-17. |
“Description of Sobel Search”, Cognex Corporation, (1998). |
Crouzil, et al., “A New Correlation Criterion Based on Gradient Fields Similarity”, Proceedings of the 13th International Conference on Pattern Recognition vol. I Track A, Computer Vision, (1996), pp. 632-636. |
Grimson, et al., “On the Sensitivity of the Hough Transform for Object Recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12. No. 3, (1990), pp. 255-274. |
Hsieh, et al., “Image Registration Using a New Edge-Based Approach”, Computer Vision and Image Understanding, vol. 67, No. 2, (1997), pp. 112-130. |
Rosenfeld, et al., “Coarse-Fine Template Matching”, IEEE Transactions on Systems, Man, and Cybernetics, (1997), pp. 104-107. |
Tian, et al., “Algorithms for Subpixel Registration”, Computer Vision Graphics and Image Processing 35, Academic Press, Inc., (1986), pp. 220-233. |
Joseph, S. H., “Fast Optimal Pose Estimation for Matching in Two Dimensions”, Image Processing and its Applications, Fifth International Conference, (1995). |
Geiger, et al., "Dynamic Programming for Detecting, Tracking, and Matching Deformable Contours", IEEE (1995), pp. 294-302. |
Cootes, T. F., et al., “Active Shape Models—Their Training and Application”, Computer Vision and Image Understanding, vol. 61, No. 1, (Jan. 1995), 38-59. |
Shi, Jianbo et al., "Normalized Cuts and Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, No. 8, (Aug. 2000), 888-905. |
Borgefors, Gunilla, "Hierarchical Chamfer Matching: A Parametric Edge Matching Algorithm", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, No. 6, (Nov. 1988). |
Huttenlocher, Daniel P., “Comparing Images using the Hausdorff Distance”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 9, (Sep. 1993). |
“Complaint and Jury Demand”, US District Court, District of Massachusetts, Cognex Corp. and Cognex Technology and Investment Corp. v. MVTEC Software GmbH; MVTEC, LLC; and Fuji America Corp. Case No. 1:08-cv-10857-JLT, (May 21, 2008). |
"Fuji America's Answer and Counterclaims", United States District Court District of Massachusetts, Cognex Corp. and Cognex Technology and Investment Corp. v. MVTEC Software GmbH; MVTEC, LLC; and Fuji America Corp. Case No. 1:08-cv-10857-JLT, (Aug. 8, 2008). |
“Plaintiffs Cognex Corporation and Cognex Technology & Investment Corporation's Reply to Counterclaims of MVTEC Software GmbH and MVTEC LLC”, Cognex Corp. and Cognex Technology and Investment Corp. v. MVTEC Software GmbH; MVTEC, LLC; and Fuji America Corp. Case No. 1:08-cv-10857-JLT, (Aug. 2008). |
Wallack, Aaron S., “Robust Algorithms for Object Localization”, International Journal of Computer Vision, (May 1998), 243-262. |
P. Tissainayagam et al., Contour Tracking with Automatic Motion Model Switching, Pattern Recognition (36), 2003, pp. 2411-2427. |
Perkins, W.A., Inspector: A Computer Vision System that Learns to Inspect Parts, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 6, (Nov. 1983). |
Cox, et al., Predicting and Estimating the Accuracy of a Subpixel Registration Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, No. 8, (Aug. 1990), 721-734. |
Feldmar, et al., 3D-2D Projective Registration of Free-Form Curves and Surfaces, Computer Vision and Image Understanding, vol. 65, No. 3, (Mar. 1997), 403-424. |
Jain, et al., Object Matching Using Deformable Templates, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, No. 3, (Mar. 1996), 267-278. |
Wells, et al., “Statistical Object Recognition”, Submitted to the Department of Electrical Engineering and Computer Science, (Nov. 24, 1992), 1-177. |
Zhang, et al., Iterative Point Matching for Registration of Free-Form Curves, (2004), 1-42. |
Lu, Shape Registration Using Optimization for Mobile Robot Navigation, Department of Computer Science, University of Toronto, (1995), 1-163 (submitted in 2 parts). |
Gennery, Donald B. Visual Tracking of Known Three-Dimensional Objects, International Journal of Computer Vision, (1992), 243-270. |
Chew, et al., Geometric Pattern Matching under Euclidean Motion, Computational Geometry, vol. 7, Issues 1-2, Jan. 1997, pp. 113-124, 1997 Published by Elsevier Science B.V. |
Alexander, et al., The Registration of MR Images Using Multiscale Robust Methods, Magnetic Resonance Imaging, pp. 453-468, vol. 14, No. 5, 1996. |
Anuta, Paul E., Spatial Registration of Multispectral and Multitemporal Digital Imagery Using Fast Fourier Transform Techniques, IEEE Transactions on Geoscience Electronics, pp. 353-368, vol. GE-8, No. 4, Oct. 1970. |
Araujo, et al., A Fully Projective Formulation for Lowe's Tracking Algorithm, The University of Rochester Computer Science Department, Technical Report 641, pp. 1-41, Nov. 1996. |
Besl, et al., A Method for Registration of 3D Shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 239-256, vol. 14, No. 2, Feb. 1992. |
Cognex Corporation, Description of Overlap in Cognex search tool and description of Overlap in Cnlpas Tool as of Jul. 12, 1997. |
Cognex, Cognex Products on Sale as of one year before Jul. 12, 1997. |
Dementhon, et al., Model-Based Object Pose in 25 Lines of Code, International Journal of Computer Vision, pp. 123-141, Kluwer Academic Publishers, Boston, MA, 1995. |
Han, et al., An Edge-Based Block Matching Technique for Video Motion, Image Processing Algorithms and Techniques II, pp. 395-408, vol. 1452, 1991. |
Hashimoto, et al., High-Speed Template Matching Algorithm Using Information of Contour Points, Systems and Computers in Japan, pp. 78-87, vol. 23, No. 9, 1992. |
Lamdan, et al., Affine Invariant Model-Based Object Recognition, IEEE Transactions on Robotics and Automation, pp. 578-589, vol. 6, No. 5, Oct. 1990. |
Neveu, et al., Two-Dimensional Object Recognition Using Multiresolution Models, Computer Vision, Graphics, and Image Processing, pp. 52-65, 1986. |
Olson, et al., Automatic Target Recognition by Matching Oriented Edge Pixels, IEEE Transactions on Image Processing, pp. 103-113, vol. 6, No. 1, Jan. 1997. |
Pratt, William K., Digital Image Processing, Sun Microsystems, Inc., pp. 651-673, 1978. |
Seitz, Peter Using Local Orientational Information as Image Primitive for Robust Object Recognition, Visual Communications and Image Processing IV, pp. 1630-1639, vol. 1199, 1989. |
Suk, et al., New Measures of Similarity Between Two Contours Based on Optimal Bivarate Transforms, Computer Vision, Graphics and Image Processing 26, pp. 168-182, 1984. |
Wunsch, et al., Registration of CAD-Models to Images by Iterative Inverse Perspective Matching, Proceedings of IEEE, 1996 pp. 78-83. |
Yamada, Hiromitsu, Map Matching-Elastic Shape Matching by Multi-Angled Parallelism, Trans. IEICE Japan D-II, pp. 553-561, vol. J73-D-II, No. 4, Apr. 1990. |