The invention pertains to machine vision and, more particularly, three-dimensional (3D) machine vision. The invention has application in manufacturing, quality control, and robotics, to name but a few fields.
Machine vision refers to the automated analysis of images to determine characteristics of objects represented in them. It is often employed in automated manufacturing lines, where images of components are analyzed to facilitate part picking, as well as to determine part placement and alignment for assembly. When robots perform the automated assembly and automated image analysis is used to facilitate part picking, placement, and alignment, the system is referred to as vision-guided robotics. Machine vision is also used for robot navigation, e.g., to ensure that scenes are recognized as robots travel through their environments.
Though three-dimensional (3D) analysis has long been discussed in the literature, most present-day machine vision systems rely on two-dimensional (2D) image analysis. This typically necessitates that objects under inspection be “presented to” the vision system in constrained orientations and locations. A conveyor belt is commonly used for this purpose. Objects being assembled or inspected are typically placed in a particular known, stable 3D configuration on the belt, but at an unknown position and orientation, and are moved into the vision system's field of view. Based on an object's 2D pose (i.e., position and orientation) in the field of view, and taking into account that it is disposed on the conveyor (thereby rendering certain its “lie” and its distance from the vision system camera), the system applies 2D geometry to determine the object's exact 3D pose and/or conformance with expected appearance.
Examples using such 2D vision analysis are provided in prior works of the assignee hereof, including U.S. Pat. No. 6,748,104, entitled “Methods and apparatus for machine vision inspection using single and multiple templates or patterns”, U.S. Pat. No. 6,639,624, entitled “Machine vision methods for inspection of leaded components”, U.S. Pat. No. 6,301,396, entitled “Non-feedback-based machine vision methods for determining a calibration relationship between a camera and a moveable object”, U.S. Pat. No. 6,137,893, entitled “Machine vision calibration targets and methods of determining their location and orientation in an image”, U.S. Pat. No. 5,978,521, entitled “Machine vision methods using feedback to determine calibration locations of multiple cameras that image a common object”, U.S. Pat. No. 5,978,080, entitled “Machine vision methods using feedback to determine an orientation, pixel width and pixel height of a field of view”, U.S. Pat. No. 5,960,125, entitled “Nonfeedback-based machine vision method for determining a calibration relationship between a camera and a moveable object,” U.S. Pat. No. 6,856,698, entitled “Fast high-accuracy multi-dimensional pattern localization”, U.S. Pat. No. 6,850,646, entitled “Fast high-accuracy multi-dimensional pattern inspection”, and U.S. Pat. No. 6,658,145, entitled “Fast high-accuracy multi-dimensional pattern inspection,” to name a few.
With the increased reliance on robotics, everywhere from the factory floor to the home, the need for practical 3D vision systems has come to the fore. This is because, in many of these environments, objects subject to inspection are not necessarily constrained in overall position and lie, e.g., as might otherwise be the case with objects presented on a conveyor belt. That is, the precise 3D configuration of the object may be unknown.
To accommodate the additional degrees of freedom of pose and position in a 3D scene, 3D vision tools are helpful, if not necessary. Examples of these include U.S. Pat. No. 6,771,808, entitled, “System and method for registering patterns transformed in six degrees of freedom using machine vision”, and U.S. Pat. No. 6,728,582, entitled, “System and method for determining the position of an object in three dimensions using a machine vision system with two cameras.”
Other machine vision techniques have been suggested in the art. Some require too much processor power to be practical for real-time application. Others require that objects subject to inspection go through complex registration procedures and/or that, during runtime, many of the objects' features be simultaneously visible in the vision system field-of-view.
Outside the machine vision realm, the art also provides contact-based methods of determining 3D poses—such as using an x,y,z measuring machine with a touch sensor. However, this requires contact, is relatively slow and can require manual intervention. Electromagnetic wave-based methods for determining 3D poses have also been offered. These do not require physical contact, but suffer from their own drawbacks, such as requiring the often impractical step of affixing transmitters to the objects that are subject to inspection.
An object of this invention is to provide improved methods and apparatus for machine vision and, more particularly, for three-dimensional machine vision.
A related object of this invention is to provide such methods and apparatus as have a range of practical applications including, but not limited to, manufacturing, quality control, and robotics.
A further related object of the invention is to provide such methods and apparatus as permit determination of, for example, position and pose in three-dimensional space.
A still further related object of the invention is to provide such methods and apparatus as impose reduced constraints, e.g., as to overall position and lie, of objects under inspection.
Yet still a further related object of the invention is to provide such methods and apparatus as minimize requirements for registration of objects subject to inspection.
Still yet a further object of the invention is to provide such methods and apparatus as can be implemented in present day and future machine vision platforms.
The foregoing are among the objects attained by the invention, which provides inter alia methods and apparatus for determining the pose, e.g., position along the x-, y- and z-axes, pitch, roll and yaw (or one or more characteristics of the pose), of an object in three dimensions by triangulation of data gleaned from multiple images of the object.
Thus, for example, in one aspect, the invention provides a method for 3D machine vision in which, during a calibration step, multiple cameras disposed to acquire images of the object from different respective viewpoints are calibrated to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view. In a training step, functionality associated with the cameras is trained to recognize expected patterns in images to be acquired of the object. A runtime step triangulates locations in 3D space of one or more of those patterns from pixel-wise positions of those patterns in images of the object and from the mappings discerned during the calibration step.
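By way of a hedged illustration only (the patent does not prescribe a particular representation), the calibration mapping can be thought of as a function from a pixel location to a 3D ray. The C++ sketch below assumes a simple pinhole model with hypothetical intrinsic parameters fx, fy, cx, cy and a camera-to-world rigid transform; lens warping is neglected here and discussed below:

    #include <array>
    #include <cmath>

    struct Ray { std::array<double,3> origin, dir; };   // dir is unit length

    struct Camera {
        double fx, fy, cx, cy;                // pinhole intrinsics (pixels)
        std::array<std::array<double,3>,3> R; // camera-to-world rotation
        std::array<double,3> t;               // camera center in world coordinates

        // Map a pixel (u,v) to the 3D ray, in world coordinates, on which
        // the imaged point must lie.
        Ray pixelToRay(double u, double v) const {
            // Direction in the camera frame (z = 1 plane).
            std::array<double,3> dc = { (u - cx) / fx, (v - cy) / fy, 1.0 };
            // Rotate into the world frame and normalize.
            std::array<double,3> dw = {0, 0, 0};
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    dw[i] += R[i][j] * dc[j];
            double n = std::sqrt(dw[0]*dw[0] + dw[1]*dw[1] + dw[2]*dw[2]);
            for (auto& c : dw) c /= n;
            return Ray{ t, dw };
        }
    };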
Further aspects of the invention provide methods as described above in which the runtime step triangulates locations from images of the object taken substantially simultaneously by the multiple cameras.
Still further aspects of the invention provide such methods including a re-calibration step in which runtime images of the object are used to discern the aforementioned mapping function, e.g., for a camera that has gone out of calibration. Thus, for example, if one camera produces images in which the patterns appear to lie at locations (e.g., when mapped to the 3D rays for that camera) inconsistent and/or in substantial disagreement with images from the other cameras (e.g., when mapped using their respective 3D rays), pattern locations determined with the images from those other cameras can be used to re-calibrate the one camera.
Yet still further aspects of the invention provide methods as described above in which the calibration step includes positioning registration targets (such as bulls eyes, cross-hairs, or the like, e.g., on calibration plates or otherwise) at known positions in 3D space and recording—or otherwise characterizing, e.g., algorithmically—correlations between those positions and the pixel-wise locations of the respective targets in the cameras' fields of view. Related aspects of the invention provide such methods in which one or more of those registration targets, calibration plates, etc., are used to calibrate multiple cameras at the same time, e.g., by way of simultaneous imaging.
Other aspects of the invention provide methods as described above in which the calibration step includes discerning a mapping function for each camera that takes into account warping in the field of view.
Further aspects of the invention include methods as described above in which the training step includes training functionality associated with the cameras to recognize expected patterns, such as letters, numbers, other symbols (such as registration targets), corners, or other discernible features (such as dark and light spots) of the object and, for example, for which measurement techniques and search/detection models are known in the art.
Further related aspects of the invention provide such methods in which the training step includes training the aforementioned functionality as to the “model points”—i.e., the expected locations in 3D space of the patterns (e.g., in absolute or relative terms) on objects that will be inspected at runtime. In combination with the triangulated 3D locations discerned from those images, that information can be used, during the runtime step, to discern the pose of that object.
According to aspects of the invention, training as to expected locations of the patterns (i.e., model points) includes finding 2D poses of a reference point (or “origin”) of each such pattern. For patterns that are expected to appear in the fields of view of two or more cameras, such reference points facilitate triangulation, as described below, for purposes of determining the position of those patterns (and, therefore, of the object) in 3D space.
Related aspects of the invention provide such methods in which training as to expected patterns includes utilizing—within the functionality associated with each camera—like models for training like expected patterns as between different cameras. This has the benefit of ensuring that the reference points (or origins) for patterns found at runtime will coincide as between images obtained by those different cameras.
Further related aspects of the invention provide such methods in which training as to expected patterns includes utilizing—within the functionality associated with each camera—different models for like patterns as between different cameras. This facilitates finding patterns, e.g., when pose, viewing angle and/or obstructions alter the way that different cameras will image those patterns.
Related aspects of the invention provide such methods that include training the selection of reference points (or origins) of patterns so modeled. Such training can be accomplished, for example, by an operator, e.g., using a laser pointer or otherwise, in order to ensure that those reference points (or origins) coincide as between images obtained by those different cameras.
Related aspects of the invention provide such methods in which the training step includes discerning the location of the patterns, for example, by utilizing a triangulation methodology similar to that exercised during the runtime phase. Alternatively, the expected (relative) locations of the patterns can be input by the operators and/or discerned by other measurement methodologies.
Further related aspects of the invention provide such methods in which the training step includes finding an expected pattern in an image from one (or more) camera(s) based on prior identification of that pattern in an image from another camera. Thus, for example, once the operator has identified an expected pattern in an image taken from one camera, the training step can include automatically finding that same pattern in images from the other cameras.
Still further aspects of the invention provide methods as described above in which the training step acquires multiple views of the object for each camera, preferably, such that the origins of the patterns found on those objects are consistently defined. To account for potential inconsistency among images, those that produce the highest match score for the patterns can be used. This has the benefit of making the methodology more robust to finding parts in arbitrary poses.
Yet in still other aspects of the invention, the runtime step includes triangulating the position of one or more of the patterns in runtime images, e.g., using pattern-matching or other two-dimensional vision tools, and using the mappings discerned during the calibration phase to correlate the pixel-wise locations of those patterns in the respective camera's fields of view with the aforementioned 3D rays on which those patterns lie.
According to related aspects of the invention, triangulation of pattern location may be by “direct” triangulation, e.g., as where the location of a given pattern is determined from the point of intersection (or the point of least squares fit) of multiple 3D rays (from multiple cameras) on which that pattern lies. Alternatively, or in addition, triangulation may be “indirect,” as where the location of a given pattern is determined not only from the ray (or rays) on which that pattern lies, but also from (i) the rays on which the other patterns lie, and (ii) the relative locations of those patterns to one another (e.g., as determined during the training phase).
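As a minimal sketch of “direct” triangulation (hypothetical types and names, not a formulation prescribed by the invention), the point of least-squares fit for two rays can be taken as the midpoint of their common perpendicular:

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double,3>;

    static double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Point minimizing the sum of squared distances to two rays
    // (o1 + s*d1) and (o2 + t*d2); directions are assumed unit length.
    Vec3 intersectTwoRays(const Vec3& o1, const Vec3& d1,
                          const Vec3& o2, const Vec3& d2) {
        Vec3 w0 = { o1[0]-o2[0], o1[1]-o2[1], o1[2]-o2[2] };
        double a = dot(d1,d1), b = dot(d1,d2), c = dot(d2,d2);
        double d = dot(d1,w0), e = dot(d2,w0);
        double denom = a*c - b*b;                 // ~0 when rays are parallel
        double s = (denom > 1e-12) ? (b*e - c*d) / denom : 0.0;
        double t = (denom > 1e-12) ? (a*e - b*d) / denom : 0.0;
        Vec3 p1 = { o1[0]+s*d1[0], o1[1]+s*d1[1], o1[2]+s*d1[2] };
        Vec3 p2 = { o2[0]+t*d2[0], o2[1]+t*d2[1], o2[2]+t*d2[2] };
        return { 0.5*(p1[0]+p2[0]), 0.5*(p1[1]+p2[1]), 0.5*(p1[2]+p2[2]) };
    }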
Other aspects of the invention provide methods as described above in which functionality associated with the cameras “times out” if it fails to find an expected pattern in an image of an object—during training or runtime—thereby avoiding undue delay in position determination, e.g., if such a pattern is missing, occluded or otherwise not detected.
Yet still other aspects of the invention parallel the methods described above in which ID matrix codes (or other patterns whose appearance and/or positions are pre-defined or otherwise known) are used in place of the patterns discussed above. In these aspects of the invention, the training step is obviated or reduced. Instead, the 2D positions of those codes (or other patterns) can be discerned from the training-phase or runtime images, e.g., by vision tools designed for generic types of features, in order to map to 3D locations.
Still other aspects of the invention provide machine vision systems, e.g., including digital processing functionality and cameras, operating in accord with the methods above. These and other aspects of the invention are evident in the drawings and in the description that follows.
A still further related aspect of the invention provides such methods and apparatus as permit the inspection of an object, e.g., to determine and validate relative positions of portions thereof. Such methods and apparatus can be used, by way of non-limiting example, to support inspection and verification, for example, during an assembly, quality assurance, maintenance or other operation.
Further related aspects of the invention provide such methods and apparatus which infer the absence or misplacement of a part (or other portion) of an object in instances where one or more expected patterns (e.g., associated with that part/portion) are absent from runtime images or are present in those images, but at pixel locations that map to 3D locations that are not expected or desirable.
Still further related aspects of the invention provide such methods and apparatus wherein, during the runtime step, the positions of parts or other portions of the object are determined based on subsets of 3D locations corresponding to patterns found in runtime images, and wherein those 3D locations are used to determine expected locations of still further patterns. The expected locations of those further patterns can be compared with their actual 3D locations, e.g., as determined from the runtime images. Where positional differences identified in the comparison exceed a designated tolerance, the system can generate appropriate notifications (e.g., to the operator).
Advantages of systems and methods according to the invention are that they are easier to use and more practical than prior art approaches—yet are vision-based and, hence, do not require contact with, or prior preparation of, objects subject to inspection. Such systems and methods (according to the invention) can be easily set up, and then trained using “show-and-go.”
In addition, they provide speedy performance and robustness, e.g., with respect to missing and incorrect results. Thus, for example, methods and apparatus according to aspects of the invention can determine the pose of objects even though some patterns are not found in some (and, under some circumstances, any) of the runtime images, e.g., because the patterns are occluded from view by one or more cameras or because images of those patterns could not be timely acquired. By way of further example, methods and apparatus according to aspects of the invention provide robustness with respect to incorrect results (e.g., caused by misaligned cameras) by triangulating using subsets of the 3D locations corresponding to patterns found in runtime images: if one of the subsets results in a lower sum-squared error, that subset can be used for position triangulation, rather than all of the patterns.
A more complete understanding of the invention may be attained by reference to the drawings, in which:
System 10 further includes digital data processor 22 and image acquisition devices 24. Digital data processor 22, here depicted as an iMac® G5 personal computer for simplicity, may be a mainframe computer, workstation, personal computer (e.g., running a Windows®/Intel Pentium 4 platform, or otherwise), dedicated vision computer, embedded processor or other digital data device, running a proprietary, open source or other operating system, that is programmed or otherwise configured in accord with the teachings hereof to determine the pose of object 12 from images supplied by acquisition devices 24. The digital data processor may include a display 22a, as shown, as well as keyboard 22b, mouse 22c and other input/output devices, all of the type known in the art.
Image acquisition devices 24 may be machine vision cameras, video cameras, still cameras or other devices capable of acquiring images of object 12 in the visible or other relevant spectrum. Without loss of generality, in the text that follows the devices 24 are typically referred to as “cameras”—though, in practice, they may comprise any manner of image acquisition functionality. In the illustrated embodiment, three such devices 24 are shown, though, in practice, any plurality of devices (e.g., two or more) may be employed. Those devices are disposed to acquire images of the object 12 from different respective viewpoints. It will also be appreciated by those skilled in the art that, in some embodiments, the 3D pose of an object under inspection can also be determined using images from a single such device 24 and, hence, that not all embodiments require images from multiple cameras.
Digital data processor 22 further includes a central processor (CPU), memory (RAM) and input/output (I/O) functionality of the type known in the art, albeit programmed for operation in accord with the teachings hereof.
Particularly, in the illustrated embodiment, these are configured to provide 3D machine vision in accord with the method shown in
Referring to
In the illustrated embodiment, the registration targets, calibration plates 40, etc., are used to calibrate multiple cameras 24 at the same time, e.g., by way of simultaneous imaging. Thus, by way of example, the operator can place a target in the field of view of two or more of the image acquisition devices 24, which simultaneously image the target for calibration purposes. Where calibration plates 40, or the like, are used for calibration, they preferably show a fiducial 41 (e.g., a unique pattern which differs from the uniform checkerboard pattern) at an origin so that all acquisition devices 24 can be calibrated with respect to the same unique reference point with specified orientation. By so calibrating the devices 24 in a consistent manner, they can all be used to map from their image coordinates (e.g., the pixel-wise locations of pattern origins) to a common reference point or frame. Preferred such fiducials are asymmetric, as in the case of the L-shaped fiducial in the drawing.
Underlying methodologies and apparatus for such calibration are taught, by way of non-limiting example, in U.S. Pat. No. 6,748,104, entitled “Methods and apparatus for machine vision inspection using single and multiple templates or patterns”, U.S. Pat. No. 6,639,624, entitled “Machine vision methods for inspection of leaded components”, U.S. Pat. No. 6,301,396, entitled “Nonfeedback-based machine vision methods for determining a calibration relationship between a camera and a moveable object”, U.S. Pat. No. 6,137,893, entitled “Machine vision calibration targets and methods of determining their location and orientation in an image”, U.S. Pat. No. 5,978,521, entitled “Machine vision methods using feedback to determine calibration locations of multiple cameras that image a common object”, U.S. Pat. No. 5,978,080, entitled “Machine vision methods using feedback to determine an orientation, pixel width and pixel height of a field of view”, U.S. Pat. No. 5,960,125, entitled “Nonfeedback-based machine vision method for determining a calibration relationship between a camera and a moveable object,” U.S. Pat. No. 6,856,698, entitled “Fast high-accuracy multi-dimensional pattern localization”, U.S. Pat. No. 6,850,646, entitled “Fast high-accuracy multi-dimensional pattern inspection”, and U.S. Pat. No. 6,658,145, entitled “Fast high-accuracy multi-dimensional pattern inspection,” the teachings of all of which are incorporated herein by reference. The methodologies and apparatus described in the latter three patents are referred to elsewhere herein by the name “PatMax.”
In an optional training step 32, a module (e.g., a code sequence, subroutine, function, object, other data structure and/or associated software, or other functionality) associated with each respective camera 24 is trained to recognize expected patterns in images to be acquired, during runtime, of the object 12. These may be letters, numbers, other symbols (such as registration targets), corners, or other features (such as dark and light spots) that are expected to be discernible in the runtime images of the object 12 and, for example, for which measurement techniques and search/detection models are known in the art. Those patterns may be permanently part of, or affixed to, the object. However, they may also be temporary, e.g., as in the case of removable calibration targets. Indeed, they need not be even physically associated with the object. For example, they may be optically or otherwise projected onto objects that are imaged during training and/or runtime phases, e.g., by a laser or other apparatus.
In addition to training the modules or other functionality associated with each camera 24 to recognize patterns, training step 32 of the illustrated embodiment includes training them as to the model point locations, i.e., the expected locations of the patterns, e.g., relative to one another (i.e., in 3D space) on objects that will be inspected during runtime. This can be done, for example, by utilizing a triangulation methodology similar to that exercised during the runtime phase. Alternatively, the expected (relative) locations of the patterns can be input by the operator and/or discerned by other measurement methodologies (e.g., rulers, calipers, optical distance gauges, and so forth).
Regardless of whether the triangulation or other methodologies are used, the training step 32 preferably includes training the modules or other functionality associated with each camera 24 as to a reference point (or “origin”) of each such trained pattern. For patterns that are expected to appear in the fields of view of two or more cameras, training as to such reference points facilitates direct and indirect triangulating of the position of those patterns and/or of the object in 3D space.
In the illustrated embodiment, such training can be effected by using like models (e.g., “PatMax”, or so forth) for training like expected patterns as between different cameras 24. This has the benefit of ensuring that the reference points (or origins) for patterns found at runtime will coincide as between images obtained by those different cameras.
Where pose, viewing angle and/or obstructions alter the way that different cameras 24 will view like patterns, such training can include utilizing different models (for like patterns) as between different cameras. Since different models may tend to identify different reference points for like patterns, the illustrated embodiment permits an operator to train the selection of like reference points for like patterns.
This can be accomplished, by way of example, during training step 32, by simultaneously acquiring images of the pattern from multiple cameras 24 (to be used as a template for searching in step 34) and, then, shining a laser pointer at the object. From images acquired with the laser shining, the 3D location of the laser point can be computed, thereby defining coincident origins on all images of the patterns. (Though described here with respect to the use of disparate models for pattern training, this technique can be applied, as well, in instances where like models are used). To this end, using the images with and without the superfluous laser pointer spot, auto-thresholding and blob analysis can be run to find the center of the spot in all the images, and thereby to determine consistent coincident origins. As discussed elsewhere herein, triangulation can be used to get the 3D position of the spot, thereby permitting use of this technique for multiple patterns on the (training) object provided, for example, that it does not move.
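One way to realize the spot-finding just described, offered only as a rough sketch under assumptions the text does not state (grayscale images, a single bright spot), is to difference the images acquired with and without the laser, threshold the difference, and take the centroid of the resulting bright pixels:

    #include <vector>
    #include <utility>

    // Grayscale image stored row-major; width*height pixels.
    struct Image { int width, height; std::vector<float> pix; };

    // Centroid (x, y) of pixels that brighten by more than 'threshold'
    // between the image without the laser spot and the image with it.
    // Returns {-1,-1} if no such pixels are found.
    std::pair<double,double> laserSpotCentroid(const Image& without,
                                               const Image& with,
                                               float threshold) {
        double sx = 0, sy = 0, n = 0;
        for (int y = 0; y < with.height; ++y)
            for (int x = 0; x < with.width; ++x) {
                int i = y * with.width + x;
                if (with.pix[i] - without.pix[i] > threshold) {
                    sx += x; sy += y; n += 1;
                }
            }
        if (n == 0) return { -1.0, -1.0 };
        return { sx / n, sy / n };
    }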
According to one preferred practice of the invention, the training step 32 includes finding an expected pattern in an image from one (or more) camera(s) 24 based on prior identification of that pattern in an image from another camera. Thus, for example, once the operator has identified an expected pattern in an image taken from one camera, the training step can include automatically finding that same pattern in images from the other cameras.
In preferred embodiments, step 32 includes acquiring multiple views of the object for each camera 24, preferably, such that the origins of the patterns found on those objects are consistently defined. To account for potential inconsistency among images, those that produce the highest match score for the patterns can be used. This has the benefit of making the methodology more robust to finding parts in arbitrary poses.
As noted above, the training step 32 is optional: in some embodiments of the invention, it is employed in reduced capacity or not at all. For example, if the patterns expected at runtime are susceptible to search via a blob model (e.g., one that looks for bright features), then no pattern training is required—though position training of the type described above will still be employed. Such is also true, by way of further example, if ID matrix codes (or other patterns whose appearance and/or positions are pre-defined or otherwise known) are used in place of the trainable patterns discussed above. Here, the 2D positions of those codes (or other patterns) are discerned from the training-phase or runtime images, e.g., by vision tools designed for generic types of features, in order to map to 3D locations. Such implementations of the invention are useful because industrial parts might always carry ID matrix codes, and multiple training-less sensors could thereby output 3D positions of the ID matrix code. Furthermore, since an ID matrix code spans a rectangular area, all of the sensors could output the 2D positions of its four corners; and, if the ID matrix code is printed with a particular process, the 3D positions can be known (by virtue of the found size/type of the code) and the 3D pose computed.
In runtime step 34, the digital data processor 22 triangulates locations in 3D space of one or more of patterns 42a-42c on the object 12 based on pixel-wise positions of representations of those patterns in runtime images of the object 12 and from the mappings discerned during calibration step 30. In the illustrated embodiment, those runtime images of the object 12 are preferably acquired simultaneously, or substantially simultaneously, by the devices 24. In this regard, substantially simultaneously refers to image acquisition occurring so nearly close in time that movement of the object 12, devices 24, frame 20, or otherwise, does not substantially affect the pixel-wise location of patterns in the runtime images and/or mappings determined therefrom. Such simultaneous acquisition can be achieved by firing the cameras 24 at the same time (or nearly so) or by other means—including, for example, stroboscopically illuminating the imaged object while the camera 24 shutters are open.
In the illustrated embodiment, position triangulation is accomplished using pattern matching or other two-dimensional vision tools to discern the pixel-wise location of patterns in the runtime images, and using the mappings discerned during the calibration phase to correlate the pixel-wise locations of those patterns in the respective camera's 24 fields of view with the aforementioned 3D rays on which those patterns lie. Examples using such 2D vision tools include aforementioned, incorporated-by-reference U.S. Pat. Nos. 6,748,104, 6,639,624, 6,301,396, 6,137,893, 5,978,521, 5,978,080, 5,960,125, 6,856,698, 6,850,646, and 6,658,145.
Triangulation of pattern location may be by “direct” triangulation, e.g., as where the location of a given pattern is determined from the point of intersection of multiple 3D rays (from multiple cameras) on which that pattern lies. Alternatively, or in addition, triangulation may be “indirect,” as where the location of a given pattern is determined not only from the ray (or rays) on which that pattern lies, but also from (i) the rays on which the other patterns lie, and (ii) the relative locations of those patterns to one another (e.g., as determined during the training phase) on the imaged object.
In the illustrated embodiment, direct and/or indirect triangulation can utilize “least squares fit” or other such methodologies for finding points of intersection (or nearest intersection) as between or among multiple 3D rays (from multiple cameras 24) on which pattern(s) appear to lie. For example, where images acquired from two or more cameras 24 indicate that a given pattern (and, more precisely, the apparent origin of that pattern) lies on two or more rays, a least squares fit methodology can be employed to determine a location of intersection of those rays in 3D space or a nearest point thereto (i.e., a point in space that lies nearest those rays). Likewise, where images from the cameras 24 indicate origins for multiple patterns on multiple rays, a least squares fit can be employed using the model points of those patterns on the object to determine the most likely locations of the patterns and/or the object itself.
The illustrated embodiment utilizes an optimizer (or “solver”) to find the least squares (or root mean square) fit of rays and patterns. This can be a general purpose tool of the type available in the art and/or it can operate in the manner detailed below. In any event, during runtime step 34, the solver is supplied with definitions of the 3D rays on which the patterns (and, more precisely, the pattern origins) identified from the runtime images lie, as well as (where relevant) the locations or relative locations of the patterns on the object.
Typically, this information defines an over-constrained system (i.e., more information is supplied by way of ray definitions and relative pattern locations on the object than is necessary to infer the actual locations), a fact on which the illustrated system capitalizes for purposes of robustness. Thus, for example, the runtime step 34 can determine object pose, e.g., even where patterns are missing from the object or its runtime image (e.g., as where a pattern is occluded from one or more camera views, or where lighting or other conditions do not permit timely acquisition of a pattern image). And, by way of further example, the runtime step 34 can include trying subsets of the pattern origins (and, more precisely, subsets of the locations corresponding to pattern origins) found by the acquisition devices 24 in order to minimize the root mean square (RMS) error of the fit between the rays and the model points or the triangulation of the rays. If one of the subsets has a lower sum squared error, that subset can be used for position triangulation, rather than all of the pattern origins.
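That subset strategy can be outlined as follows, as a hypothetical sketch in which the error functor stands in for whatever triangulation or pose-fit error the system actually computes:

    #include <vector>
    #include <functional>
    #include <limits>

    // Try subsets of the found pattern origins (identified here only by index)
    // and keep the subset whose fit yields the lowest sum-squared error.
    // 'fitError' stands in for the triangulation/pose-fit error of a subset.
    std::vector<int> bestSubset(int numFound, int minSize,
            const std::function<double(const std::vector<int>&)>& fitError) {
        std::vector<int> best;
        double bestErr = std::numeric_limits<double>::max();
        // Enumerate subsets via bitmask (practical for small numbers of patterns).
        for (unsigned mask = 1; mask < (1u << numFound); ++mask) {
            std::vector<int> subset;
            for (int i = 0; i < numFound; ++i)
                if (mask & (1u << i)) subset.push_back(i);
            if ((int)subset.size() < minSize) continue;
            double err = fitError(subset);
            if (err < bestErr) { bestErr = err; best = subset; }
        }
        return best;
    }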
As noted, the over-constrained nature of the system also permits the runtime step 34 to determine object pose even when certain patterns are intentionally omitted from consideration (e.g., so as to inspect/validate a pattern's position by comparing it to the position predicted by the other found patterns).
And, by way of further example, the runtime step 34 can include trying subsets of the pattern origins (and, more precisely, subsets of the locations corresponding to pattern origins) found by the acquisition devices 24 in order to minimize the root mean square (RMS) error of the fit between the rays and the model points or the triangulation of the rays. The step can then extrapolate the 3D positions of the patterns that were not included in the subset (i.e., that were intentionally omitted as mentioned above) and predict their 2D image positions in their respective cameras. The predicted image position can be compared to the actual measured image position: if the distance between the predicted image position and the actual measured image position exceeds some user-specified distance tolerance, then the system can generate an appropriate warning or other notification. Alternatively, or in addition, the extrapolated 3D positions of the omitted patterns can be compared against 3D positions determined by triangulation; again, where the extrapolated (predicted) and actual positions differ, the runtime step 34 can include generating an appropriate warning or other notification.
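The tolerance check itself can be sketched simply (hypothetical names, pixel units assumed): a predicted image position is compared against the measured one and a warning is emitted when the discrepancy is too large:

    #include <cmath>
    #include <cstdio>

    // Compare a predicted image position against the measured one and warn
    // if the discrepancy exceeds a user-specified tolerance (in pixels).
    bool withinTolerance(double predU, double predV,
                         double measU, double measV,
                         double tolerancePixels) {
        double d = std::hypot(predU - measU, predV - measV);
        if (d > tolerancePixels) {
            std::fprintf(stderr,
                "warning: pattern off by %.2f pixels (tolerance %.2f)\n",
                d, tolerancePixels);
            return false;
        }
        return true;
    }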
In order to improve the speed of pattern recognition, during both training and runtime phases, the illustrated embodiment can exploit the found position of one pattern to limit the search degrees of freedom for the other pattern. For example, if a first camera 24 finds the pattern at 15 degrees, and another camera is approximately in the same orientation as the first camera, then it may only need to look for the pattern at orientations of 15+/−10 degrees. In addition, given the origin's position from one camera, we know that the origin will lie along a 3D ray; we can therefore project that ray onto the second camera's field of view, and only look for the pattern along that line.
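A sketch of that ray-projection constraint follows (a hypothetical pinhole formulation, not code taken from the invention): the ray from the first camera is sampled over a working depth range and projected into the second camera, yielding a 2D segment around which the search can be confined:

    #include <array>
    #include <utility>

    // Minimal pinhole camera for this sketch: intrinsics plus a camera-to-world
    // rotation R and camera center t (all hypothetical parameter names).
    struct Cam {
        double fx, fy, cx, cy;
        std::array<std::array<double,3>,3> R;
        std::array<double,3> t;
    };
    struct Ray3 { std::array<double,3> origin, dir; };

    // Project a world point into pixel coordinates (world-to-camera is R^T*(p-t)).
    static std::pair<double,double> project(const Cam& cam,
                                            const std::array<double,3>& pw) {
        std::array<double,3> d = { pw[0]-cam.t[0], pw[1]-cam.t[1], pw[2]-cam.t[2] };
        std::array<double,3> pc = {0, 0, 0};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                pc[i] += cam.R[j][i] * d[j];          // R^T * d
        return { cam.fx*pc[0]/pc[2] + cam.cx, cam.fy*pc[1]/pc[2] + cam.cy };
    }

    // Project the 3D ray (from the first camera) into the second camera's image
    // over a working depth range; the 2D pattern search in that camera can then
    // be restricted to a band around the resulting segment.
    std::pair<std::pair<double,double>, std::pair<double,double>>
    searchSegment(const Cam& second, const Ray3& ray, double nearS, double farS) {
        auto at = [&](double s) {
            return std::array<double,3>{ ray.origin[0] + s*ray.dir[0],
                                         ray.origin[1] + s*ray.dir[1],
                                         ray.origin[2] + s*ray.dir[2] };
        };
        return { project(second, at(nearS)), project(second, at(farS)) };
    }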
If two patterns are confusable (i.e., there are two instances similar to the pattern in the camera's field of view), the illustrated embodiment can try all of the different possible correspondences. For example, using the technology described in aforementioned, incorporated-by-reference U.S. Pat. No. 6,856,698, entitled “Fast high-accuracy multidimensional pattern localization”, U.S. Pat. No. 6,850,646, entitled “Fast high-accuracy multi-dimensional pattern inspection”, and U.S. Pat. No. 6,658,145, entitled “Fast high-accuracy multi-dimensional pattern inspection,” patterns such as the character sequences “P”, “ST”, “It”, and “Notes” (from a POST-IT® Notes label) are all different, so when a match is found, we know it is a correct match.
Alternatively, the machine vision tool known as “blob” analysis can be used to find patterns (e.g., if they are dark holes). In this case, it could be hypothesized that blob #1 corresponds to 3D model point #1, and blob #2 corresponds to 3D model point #2, etc. If that doesn't work, then the analysis can move on to the next hypothesis: that blob #1 corresponds to 3D model point #2, and blob #2 corresponds to 3D model point #1.
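Exhaustively trying such correspondence hypotheses can be sketched as below (hypothetical names; the error functor stands in for the pose-fit or triangulation error of a candidate assignment, and the number of blobs is assumed small enough for brute force):

    #include <vector>
    #include <functional>
    #include <numeric>
    #include <algorithm>
    #include <limits>

    // Try every assignment of found blobs to 3D model points and keep the one
    // whose fit yields the lowest error.  'hypothesisError' stands in for the
    // triangulation/pose-fit error of a candidate correspondence
    // (assignment[i] is the model-point index matched to blob i).
    std::vector<int> bestCorrespondence(int numBlobs,
            const std::function<double(const std::vector<int>&)>& hypothesisError) {
        std::vector<int> assignment(numBlobs);
        std::iota(assignment.begin(), assignment.end(), 0);
        std::vector<int> best = assignment;
        double bestErr = std::numeric_limits<double>::max();
        do {
            double err = hypothesisError(assignment);
            if (err < bestErr) { bestErr = err; best = assignment; }
        } while (std::next_permutation(assignment.begin(), assignment.end()));
        return best;
    }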
A more complete understanding of the triangulation process of the illustrated embodiment may be appreciated from the discussion that follows.
To intersect n 3D rays (i.e., to find the point which minimizes the sum squared distance to n 3D rays), first characterize each ray as two separate orthogonal planes (since the squared distance from a point to a ray is the sum of the squared distances of the point to two orthogonal planes intersecting that ray). This is exemplified by the C++ code below:
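A compact sketch of that formulation (hypothetical helper types, not the original listing) accumulates the two plane constraints contributed by each ray into a 3x3 linear system and solves it by Cramer's rule; at least two non-parallel rays are assumed, since the system is otherwise singular:

    #include <array>
    #include <cmath>
    #include <vector>

    using Vec3 = std::array<double,3>;

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
    }
    static double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
    static Vec3 normalize(Vec3 v) {
        double n = std::sqrt(dot(v, v));
        return { v[0]/n, v[1]/n, v[2]/n };
    }

    struct RayLS { Vec3 origin, dir; };   // ray: origin + s*dir, dir unit length

    // Point minimizing the sum of squared distances to the given rays.
    // Each ray contributes two orthogonal plane constraints n.x = n.origin,
    // where n is perpendicular to the ray direction; the normal equations
    // form a 3x3 linear system solved here by Cramer's rule.
    Vec3 intersectRays(const std::vector<RayLS>& rays) {
        double A[3][3] = {{0}}, b[3] = {0};
        for (const auto& r : rays) {
            // Two unit normals spanning the plane perpendicular to the ray.
            Vec3 seed = std::fabs(r.dir[0]) < 0.9 ? Vec3{1,0,0} : Vec3{0,1,0};
            Vec3 u = normalize(cross(r.dir, seed));
            Vec3 v = cross(r.dir, u);                 // already unit length
            for (const Vec3& n : { u, v }) {
                double off = dot(n, r.origin);
                for (int i = 0; i < 3; ++i) {
                    b[i] += n[i] * off;
                    for (int j = 0; j < 3; ++j) A[i][j] += n[i] * n[j];
                }
            }
        }
        auto det3 = [](double m[3][3]) {
            return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
                 - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
                 + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
        };
        double D = det3(A);
        Vec3 x = {0, 0, 0};
        for (int c = 0; c < 3; ++c) {                 // Cramer's rule
            double M[3][3];
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    M[i][j] = (j == c) ? b[i] : A[i][j];
            x[c] = det3(M) / D;
        }
        return x;
    }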
To solve for the pose which best maps the 3D points onto the corresponding 3D rays, the following equations can be used. They are expressed in the Maple math package (commercially available from Maplesoft, a division of Waterloo Maple), which is used to generate optimized C code.
The approach solves for the pose (which is expressed in terms of the variables a,b,c,d,tx,ty,tz) which minimizes the sum squared error between points p (which are expressed as x,y,z) and planes expressed as (px, py, pz, pt). Note that each 3D ray corresponds to two such plane constraints. The approach computes the sum squared error by summing up the coefficients of the algebraic error function. It then solves for the optimal a,b,c,d,tx,ty,tz using gradient descent. Note that since there are seven variables (a,b,c,d,tx,ty,tz) and only six degrees of freedom, four different cases are tried—where a is set to 1 and the free rotation variables are b,c,d; where b is set to 1 and they are a,c,d; where c is set to 1 and they are a,b,d; and where d is set to 1 and they are a,b,c.
The foregoing will be further appreciated in view of the following, in which GenericPoly( ) is a function which extracts the coefficients of a function. Thus, if the function is x*x+2*x*y+y*y, then the generic function is f0x2y0*x*x+f0x1y1*x*y+f0x0y2*y*y where f0x2y0=1, f0x1y1=2, f0x0y2=1. GenericPoly( ) is included in MARS, a Maple Matlab Resultant Solver system publicly and freely available, e.g., by way of non-limiting example from www.cs.unc.edu/˜geom/MARS.
Note that the partial derivative (with respect to a) of the error function is not simply derivative(weightMat*weightMat); it is actually (a*a+b*b+c*c+d*d)*derivative(weightMat*weightMat) minus 4*a*weightMat*weightMat, which is written as:

unit*diff(eval(genPoly[1]),a)−eval(genPoly[1])*4*a

This follows from the chain rule for quotients:

deriv(F(x)/G(x))==(G(x)*F′(x)−F(x)*G′(x))/(G(x)*G(x))

Note that we can ignore the square of the denominator (in the denominator of the chain rule for partial derivatives) for this analysis because the denominator (a*a+b*b+c*c+d*d) applies uniformly to all partial derivatives, and that:

d((a*a+b*b+c*c+d*d)^2)/da=4*a*(a*a+b*b+c*c+d*d)
By way of further explanation of the foregoing, numerical gradient descent methods make use of an error function, as well as derivatives of that error function. The derivatives of an error function can be computed numerically or symbolically. For numerically computed derivatives, one can simply change one of the variables by a small amount, and then recompute the error function, and thus numerically compute the derivative. For symbolically computed derivatives, one needs a symbolic function corresponding to the derivative—which we have in this case because we have the algebraic expressions describing the error function, and we can symbolically differentiate that algebraic error function.
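The numerical route can be illustrated with a brief sketch (a hypothetical packing of the pose parameters; any error function of the pose would do):

    #include <array>
    #include <functional>

    // Numerically estimate the partial derivative of an error function E(p)
    // with respect to parameter index i by central finite differences.
    // Here the pose parameters are packed as p = (a, b, c, d, tx, ty, tz).
    double numericalPartial(const std::function<double(const std::array<double,7>&)>& E,
                            std::array<double,7> p, int i, double h = 1e-6) {
        std::array<double,7> lo = p, hi = p;
        lo[i] -= h;
        hi[i] += h;
        return (E(hi) - E(lo)) / (2.0 * h);
    }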
In the illustrated embodiment, a C data structure keeps all of the coefficients for the algebraic expression, as follows:
The illustrated embodiment also utilizes a function which adds to the coefficients of the algebraic expression (those functions take as input a 3D point (x,y,z) and a corresponding plane—characterized by px,py,pz,pt):
The illustrated embodiment also utilizes a function which computes the error at a given pose (a,b,c,d,tx,ty,tz), where (a,b,c,d) is a quaternion representation of the 3D rotation (roll, pitch, yaw) and (tx,ty,tz) is the translation:
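The flavor of such an error function can be conveyed by the following sketch (not the generated code; unit-length plane normals and hypothetical type names are assumed): each constraint pairs a model point with one of the plane equations derived from a ray, the quaternion is converted to a rotation, and the squared point-to-plane residuals are summed:

    #include <array>
    #include <vector>

    // A plane constraint px*X + py*Y + pz*Z + pt = 0, paired with the model
    // point (x, y, z) expected to lie on it after the pose is applied.
    struct PointPlane { double x, y, z, px, py, pz, pt; };

    // Sum-squared point-to-plane error at pose (a,b,c,d,tx,ty,tz), where
    // (a,b,c,d) is a (not necessarily unit) quaternion and (tx,ty,tz) the
    // translation.  Each 3D ray contributes two such plane constraints.
    double poseError(const std::vector<PointPlane>& constraints,
                     double a, double b, double c, double d,
                     double tx, double ty, double tz) {
        double s = a*a + b*b + c*c + d*d;     // quaternion norm squared
        // Rotation matrix of the (normalized) quaternion.
        double R[3][3] = {
            { (a*a+b*b-c*c-d*d)/s, 2*(b*c-a*d)/s,       2*(b*d+a*c)/s       },
            { 2*(b*c+a*d)/s,       (a*a-b*b+c*c-d*d)/s, 2*(c*d-a*b)/s       },
            { 2*(b*d-a*c)/s,       2*(c*d+a*b)/s,       (a*a-b*b-c*c+d*d)/s }
        };
        double err = 0;
        for (const auto& pp : constraints) {
            double X = R[0][0]*pp.x + R[0][1]*pp.y + R[0][2]*pp.z + tx;
            double Y = R[1][0]*pp.x + R[1][1]*pp.y + R[1][2]*pp.z + ty;
            double Z = R[2][0]*pp.x + R[2][1]*pp.y + R[2][2]*pp.z + tz;
            // Signed distance to the plane (assuming a unit plane normal).
            double r = pp.px*X + pp.py*Y + pp.pz*Z + pp.pt;
            err += r * r;
        }
        return err;
    }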
The illustrated embodiment also provides functions which compute the derivatives:
For example, the function which adds to the coefficients can be expressed in a manner consistent with the exemplary excerpts that follow; the complete function is evident in view of the teachings hereof and the Maple code provided:
Following this, a call is made to a function ptRayGenericPoly_addToVals( ) for each point/plane pair (characterized by (x,y,z,px,py,pz,pt)), and it accumulates the monomial coefficients into the summed error function.
Referring to optional step 36 of
In this regard, it will be appreciated that the mappings determined in step 30 (as well as in step 36) are decomposable into two separate effects: lens distortion, such as pincushioning and other image aberrations, which will remain constant if the camera 24 is nudged (because it is only a function of the lens and because, for example, the lens can be glued in place with respect to the CCD, CMOS or other image sensor), and the pose of the camera 24 with respect to the workspace. It is the latter—the pose—that will change if the camera is nudged. In step 36, that aspect of the mapping attributable to the pose of the camera in the workspace can be recomputed, e.g., without requiring a calibration plate, because the lens distortion is assumed to remain constant.
Illustrated system 10 preferably includes a “time out” feature that prevents undue delay in instances where an expected pattern is not detected in an image acquired during runtime phase 34. In this regard, the system simply treats a pattern that is not detected within a designated delay interval (e.g., set by the operator or otherwise) as not found and proceeds with position determination on the basis of the other, found patterns. This has the benefit of making the system 10 more robust with respect to missing features and its operation more timely.
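One way such a time-out might be structured (purely a sketch, since no particular mechanism is mandated) is to drive the pattern search in bounded steps and abandon it once the designated delay interval has elapsed:

    #include <chrono>
    #include <optional>
    #include <functional>

    struct PatternResult { double u, v, score; };

    // Poll a stepwise pattern search, giving up once a designated delay
    // interval has elapsed; the pattern is then treated as "not found" and
    // position determination proceeds with the other, found patterns.
    // 'step' advances the search and returns a result when one is available;
    // it is assumed to do a bounded amount of work per call.
    std::optional<PatternResult> findWithTimeout(
            const std::function<std::optional<PatternResult>()>& step,
            std::chrono::milliseconds timeout) {
        auto deadline = std::chrono::steady_clock::now() + timeout;
        while (std::chrono::steady_clock::now() < deadline) {
            if (auto r = step())
                return r;
        }
        return std::nullopt;
    }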
Described above are methods and apparatus meeting the objects set forth above, among others. It will be appreciated that the methods and apparatus shown in the drawings and described above are merely examples of embodiments of the invention, and that other embodiments incorporating changes therein fall within the scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
3727034 | Pope | Apr 1973 | A |
3779178 | Riseley, Jr. | Dec 1973 | A |
3816722 | Sakoe et al. | Jun 1974 | A |
3936800 | Ejiri et al. | Feb 1976 | A |
3967100 | Shimomura | Jun 1976 | A |
3968475 | McMahon | Jul 1976 | A |
3978326 | Shimomura | Aug 1976 | A |
4000400 | Elder | Dec 1976 | A |
4011403 | Epstein et al. | Mar 1977 | A |
4115702 | Nopper | Sep 1978 | A |
4115762 | Akiyama et al. | Sep 1978 | A |
4183013 | Agrawala et al. | Jan 1980 | A |
4200861 | Hubach et al. | Apr 1980 | A |
4254400 | Yoda et al. | Mar 1981 | A |
4286293 | Jablonowski | Aug 1981 | A |
4300164 | Sacks | Nov 1981 | A |
4303851 | Mottier | Dec 1981 | A |
4382255 | Pretini | May 1983 | A |
4385322 | Hubach et al. | May 1983 | A |
4435837 | Abernathy | Mar 1984 | A |
4441124 | Heebner et al. | Apr 1984 | A |
4441206 | Kuniyoshi et al. | Apr 1984 | A |
4519041 | Fant et al. | May 1985 | A |
4534813 | Williamson et al. | Aug 1985 | A |
4541116 | Lougheed | Sep 1985 | A |
4545067 | Juvin et al. | Oct 1985 | A |
4570180 | Baier et al. | Feb 1986 | A |
4577344 | Warren et al. | Mar 1986 | A |
4581762 | Lapidus et al. | Apr 1986 | A |
4606065 | Beg et al. | Aug 1986 | A |
4617619 | Gehly | Oct 1986 | A |
4630306 | West et al. | Dec 1986 | A |
4631750 | Gabriel et al. | Dec 1986 | A |
4641349 | Flom et al. | Feb 1987 | A |
4688088 | Hamazaki et al. | Aug 1987 | A |
4706168 | Weisner | Nov 1987 | A |
4707647 | Coldren et al. | Nov 1987 | A |
4728195 | Silver | Mar 1988 | A |
4730260 | Mori et al. | Mar 1988 | A |
4731858 | Grasmueller et al. | Mar 1988 | A |
4736437 | Sacks et al. | Apr 1988 | A |
4742551 | Deering | May 1988 | A |
4752898 | Koenig | Jun 1988 | A |
4758782 | Kobayashi | Jul 1988 | A |
4764870 | Haskin | Aug 1988 | A |
4771469 | Wittenburg | Sep 1988 | A |
4776027 | Hisano et al. | Oct 1988 | A |
4782238 | Radl et al. | Nov 1988 | A |
4783826 | Koso | Nov 1988 | A |
4783828 | Sadjadi | Nov 1988 | A |
4783829 | Miyakawa et al. | Nov 1988 | A |
4799243 | Zepke | Jan 1989 | A |
4809077 | Norita et al. | Feb 1989 | A |
4821333 | Gillies | Apr 1989 | A |
4831580 | Yamada | May 1989 | A |
4847485 | Koelsch | Jul 1989 | A |
4860374 | Murakami et al. | Aug 1989 | A |
4860375 | McCubbrey et al. | Aug 1989 | A |
4876457 | Bose | Oct 1989 | A |
4876728 | Roth | Oct 1989 | A |
4891767 | Rzasa et al. | Jan 1990 | A |
4903218 | Longo et al. | Feb 1990 | A |
4907169 | Lovoi | Mar 1990 | A |
4908874 | Gabriel | Mar 1990 | A |
4912559 | Ariyoshi et al. | Mar 1990 | A |
4912659 | Liang | Mar 1990 | A |
4914553 | Hamada et al. | Apr 1990 | A |
4922543 | Ahlbom et al. | May 1990 | A |
4926492 | Tanaka et al. | May 1990 | A |
4932065 | Feldgajer | Jun 1990 | A |
4953224 | Ichinose et al. | Aug 1990 | A |
4955062 | Terui | Sep 1990 | A |
4959898 | Landman et al. | Oct 1990 | A |
4962423 | Yamada et al. | Oct 1990 | A |
4970653 | Kenue | Nov 1990 | A |
4972359 | Silver et al. | Nov 1990 | A |
4982438 | Usami et al. | Jan 1991 | A |
4998209 | Vuichard et al. | Mar 1991 | A |
5005126 | Haskin | Apr 1991 | A |
5012402 | Akiyama | Apr 1991 | A |
5012433 | Callahan et al. | Apr 1991 | A |
5012524 | LeBeau | Apr 1991 | A |
5027419 | Davis | Jun 1991 | A |
5046190 | Daniel et al. | Sep 1991 | A |
5054096 | Beizer | Oct 1991 | A |
5060276 | Morris et al. | Oct 1991 | A |
5063608 | Siegel | Nov 1991 | A |
5073958 | Imme | Dec 1991 | A |
5075864 | Sakai | Dec 1991 | A |
5081656 | Baker et al. | Jan 1992 | A |
5081689 | Meyer et al. | Jan 1992 | A |
5083073 | Kato | Jan 1992 | A |
5086478 | Kelly-Mahaffey et al. | Feb 1992 | A |
5090576 | Menten | Feb 1992 | A |
5091861 | Geller et al. | Feb 1992 | A |
5091968 | Higgins et al. | Feb 1992 | A |
5093867 | Hori et al. | Mar 1992 | A |
5097454 | Schwarz et al. | Mar 1992 | A |
5113565 | Cipolla et al. | May 1992 | A |
5115309 | Hang | May 1992 | A |
5119435 | Berkin | Jun 1992 | A |
5124622 | Kawamura et al. | Jun 1992 | A |
5133022 | Weideman | Jul 1992 | A |
5134575 | Takagi | Jul 1992 | A |
5143436 | Baylor et al. | Sep 1992 | A |
5145432 | Midland et al. | Sep 1992 | A |
5151951 | Ueda et al. | Sep 1992 | A |
5153925 | Tanioka et al. | Oct 1992 | A |
5155775 | Brown | Oct 1992 | A |
5159281 | Hedstrom et al. | Oct 1992 | A |
5159645 | Kumagai | Oct 1992 | A |
5164994 | Bushroe | Nov 1992 | A |
5168269 | Harlan | Dec 1992 | A |
5175808 | Sayre | Dec 1992 | A |
5179419 | Palmquist et al. | Jan 1993 | A |
5185810 | Freischlad | Feb 1993 | A |
5185855 | Kato et al. | Feb 1993 | A |
5189712 | Kajiwara et al. | Feb 1993 | A |
5201906 | Schwarz et al. | Apr 1993 | A |
5204944 | Wolberg et al. | Apr 1993 | A |
5206820 | Ammann et al. | Apr 1993 | A |
5208750 | Kurami et al. | May 1993 | A |
5216503 | Paik | Jun 1993 | A |
5225940 | Ishii et al. | Jul 1993 | A |
5230027 | Kikuchi | Jul 1993 | A |
5243607 | Masson et al. | Sep 1993 | A |
5253306 | Nishio | Oct 1993 | A |
5253308 | Johnson | Oct 1993 | A |
5259038 | Sakou et al. | Nov 1993 | A |
5265173 | Griffin et al. | Nov 1993 | A |
5271068 | Ueda et al. | Dec 1993 | A |
5287449 | Kojima | Feb 1994 | A |
5297238 | Wang et al. | Mar 1994 | A |
5297256 | Wolstenholme et al. | Mar 1994 | A |
5299269 | Gaborski et al. | Mar 1994 | A |
5301115 | Nouso | Apr 1994 | A |
5307419 | Tsujino et al. | Apr 1994 | A |
5311598 | Bose et al. | May 1994 | A |
5315388 | Shen et al. | May 1994 | A |
5319457 | Nakahashi et al. | Jun 1994 | A |
5327156 | Masukane et al. | Jul 1994 | A |
5329469 | Watanabe | Jul 1994 | A |
5337262 | Luthi et al. | Aug 1994 | A |
5337267 | Colavin | Aug 1994 | A |
5363507 | Nakayama et al. | Nov 1994 | A |
5367439 | Mayer et al. | Nov 1994 | A |
5367667 | Wahlquist et al. | Nov 1994 | A |
5371690 | Engel et al. | Dec 1994 | A |
5371836 | Mitomi et al. | Dec 1994 | A |
5387768 | Tzard et al. | Feb 1995 | A |
5388197 | Rayner | Feb 1995 | A |
5388252 | Dreste et al. | Feb 1995 | A |
5398292 | Aoyama | Mar 1995 | A |
5432525 | Maruo et al. | Jul 1995 | A |
5432712 | Chan | Jul 1995 | A |
5440699 | Farrand et al. | Aug 1995 | A |
5455870 | Sepai et al. | Oct 1995 | A |
5455933 | Schieve et al. | Oct 1995 | A |
5471312 | Watanabe et al. | Nov 1995 | A |
5475766 | Tsuchiya et al. | Dec 1995 | A |
5475803 | Stearns et al. | Dec 1995 | A |
5477138 | Erjavic et al. | Dec 1995 | A |
5481712 | Silver et al. | Jan 1996 | A |
5485570 | Busboom et al. | Jan 1996 | A |
5491780 | Fyles et al. | Feb 1996 | A |
5495424 | Tokura | Feb 1996 | A |
5495537 | Bedrosian et al. | Feb 1996 | A |
5496106 | Anderson | Mar 1996 | A |
5500906 | Picard et al. | Mar 1996 | A |
5506617 | Parulski et al. | Apr 1996 | A |
5506682 | Pryor | Apr 1996 | A |
5511015 | Flockencier | Apr 1996 | A |
5519784 | Vermeulen | May 1996 | A |
5519840 | Matias et al. | May 1996 | A |
5526050 | King et al. | Jun 1996 | A |
5528703 | Lee | Jun 1996 | A |
5529138 | Shaw et al. | Jun 1996 | A |
5532739 | Garakani et al. | Jul 1996 | A |
5539409 | Mathews et al. | Jul 1996 | A |
5544256 | Brecher et al. | Aug 1996 | A |
5548326 | Michael | Aug 1996 | A |
5550763 | Michael | Aug 1996 | A |
5550888 | Neitzel et al. | Aug 1996 | A |
5553859 | Kelly et al. | Sep 1996 | A |
5555312 | Shima et al. | Sep 1996 | A |
5557410 | Huber et al. | Sep 1996 | A |
5557690 | O'Gorman et al. | Sep 1996 | A |
5559551 | Sakamoto et al. | Sep 1996 | A |
5559904 | Holzmann | Sep 1996 | A |
5565918 | Homma et al. | Oct 1996 | A |
5566877 | McCormack | Oct 1996 | A |
5568563 | Tanaka et al. | Oct 1996 | A |
5574668 | Beaty | Nov 1996 | A |
5574801 | Collet-Beillon | Nov 1996 | A |
5581250 | Khvilivitzky | Dec 1996 | A |
5581625 | Connell | Dec 1996 | A |
5581632 | Koljonen et al. | Dec 1996 | A |
5583949 | Smith et al. | Dec 1996 | A |
5583954 | Garakani | Dec 1996 | A |
5586058 | Aloni et al. | Dec 1996 | A |
5592562 | Rooks | Jan 1997 | A |
5594859 | Palmer et al. | Jan 1997 | A |
5598345 | Tokura | Jan 1997 | A |
5602937 | Bedrosian et al. | Feb 1997 | A |
5608490 | Ogawa | Mar 1997 | A |
5608872 | Schwartz et al. | Mar 1997 | A |
5621811 | Roder et al. | Apr 1997 | A |
5627915 | Rosser et al. | May 1997 | A |
5640199 | Garakani et al. | Jun 1997 | A |
5640200 | Michael | Jun 1997 | A |
5642106 | Hancock et al. | Jun 1997 | A |
5642158 | Petry, III et al. | Jun 1997 | A |
5647009 | Aoki et al. | Jul 1997 | A |
5649032 | Burt et al. | Jul 1997 | A |
5652658 | Jackson et al. | Jul 1997 | A |
5657403 | Wolff et al. | Aug 1997 | A |
5673334 | Nichani et al. | Sep 1997 | A |
5675358 | Bullock et al. | Oct 1997 | A |
5676302 | Petry, III | Oct 1997 | A |
5684530 | White | Nov 1997 | A |
5696848 | Patti et al. | Dec 1997 | A |
5706355 | Raboisson et al. | Jan 1998 | A |
5715369 | Spoltman et al. | Feb 1998 | A |
5715385 | Stearns et al. | Feb 1998 | A |
5717785 | Silver | Feb 1998 | A |
5724439 | Mizuoka et al. | Mar 1998 | A |
5734807 | Sumi | Mar 1998 | A |
5739846 | Gieskes | Apr 1998 | A |
5740285 | Bloomberg et al. | Apr 1998 | A |
5742037 | Scola et al. | Apr 1998 | A |
5751853 | Michael | May 1998 | A |
5754679 | Koljonen et al. | May 1998 | A |
5757956 | Koljonen et al. | May 1998 | A |
5761326 | Brady et al. | Jun 1998 | A |
5761337 | Nishimura et al. | Jun 1998 | A |
5768443 | Michael et al. | Jun 1998 | A |
5793899 | Wolff et al. | Aug 1998 | A |
5796386 | Lipscomb et al. | Aug 1998 | A |
5796868 | Dutta-Choudhury | Aug 1998 | A |
5801966 | Ohashi | Sep 1998 | A |
5805722 | Cullen et al. | Sep 1998 | A |
5809658 | Jackson et al. | Sep 1998 | A |
5818443 | Schott | Oct 1998 | A |
5822055 | Tsai et al. | Oct 1998 | A |
5825483 | Michael et al. | Oct 1998 | A |
5825913 | Rostami et al. | Oct 1998 | A |
5835099 | Marimont | Nov 1998 | A |
5835622 | Koljonen et al. | Nov 1998 | A |
5845007 | Ohashi et al. | Dec 1998 | A |
5847714 | Naqvi et al. | Dec 1998 | A |
5848189 | Pearson et al. | Dec 1998 | A |
5850466 | Schott | Dec 1998 | A |
5859923 | Petry, III et al. | Jan 1999 | A |
5859924 | Liu et al. | Jan 1999 | A |
5861909 | Garakani et al. | Jan 1999 | A |
5866887 | Hashimoto et al. | Feb 1999 | A |
5870495 | Mancuso et al. | Feb 1999 | A |
5872870 | Michael | Feb 1999 | A |
5878152 | Sussman | Mar 1999 | A |
5880782 | Koyanagi et al. | Mar 1999 | A |
5900975 | Sussman | May 1999 | A |
5901241 | Koljonen et al. | May 1999 | A |
5909504 | Whitman | Jun 1999 | A |
5912768 | Sissom et al. | Jun 1999 | A |
5912984 | Michael et al. | Jun 1999 | A |
5917937 | Szeliski et al. | Jun 1999 | A |
5918196 | Jacobson | Jun 1999 | A |
5933523 | Drisko et al. | Aug 1999 | A |
5943441 | Michael | Aug 1999 | A |
5949901 | Nichani et al. | Sep 1999 | A |
5953130 | Benedict et al. | Sep 1999 | A |
5960125 | Michael et al. | Sep 1999 | A |
5961571 | Gorr et al. | Oct 1999 | A |
5974169 | Bachelder | Oct 1999 | A |
5974365 | Mitchell | Oct 1999 | A |
5978080 | Michael et al. | Nov 1999 | A |
5978502 | Ohashi | Nov 1999 | A |
5978521 | Wallack et al. | Nov 1999 | A |
5995649 | Marugame | Nov 1999 | A |
6002738 | Cabral et al. | Dec 1999 | A |
6002793 | Silver et al. | Dec 1999 | A |
6005965 | Tsuda et al. | Dec 1999 | A |
6016152 | Dickie | Jan 2000 | A |
6025854 | Hinz et al. | Feb 2000 | A |
6026172 | Lewis, Jr. et al. | Feb 2000 | A |
6026176 | Whitman | Feb 2000 | A |
6028626 | Aviv | Feb 2000 | A |
6067379 | Silver | May 2000 | A |
6069668 | Woodham, Jr. et al. | May 2000 | A |
6075881 | Foster et al. | Jun 2000 | A |
6081619 | Kazuhiko et al. | Jun 2000 | A |
6084631 | Tonkin et al. | Jul 2000 | A |
6118540 | Roy et al. | Sep 2000 | A |
6137893 | Michael et al. | Oct 2000 | A |
6141033 | Michael et al. | Oct 2000 | A |
6141040 | Toh | Oct 2000 | A |
6166811 | Long et al. | Dec 2000 | A |
6173070 | Michael et al. | Jan 2001 | B1 |
6188784 | Linker, Jr. | Feb 2001 | B1 |
6195102 | McNeil et al. | Feb 2001 | B1 |
6205233 | Morley et al. | Mar 2001 | B1 |
6205242 | Onoguchi et al. | Mar 2001 | B1 |
6215898 | Woodfill et al. | Apr 2001 | B1 |
6215915 | Reyzin | Apr 2001 | B1 |
6226396 | Marugame et al. | May 2001 | B1 |
6236769 | Desai | May 2001 | B1 |
6259827 | Nichani | Jul 2001 | B1 |
6279579 | Riaziat et al. | Aug 2001 | B1 |
6282328 | Desai | Aug 2001 | B1 |
6295367 | Crabtree et al. | Sep 2001 | B1 |
6297844 | Schatz et al. | Oct 2001 | B1 |
6298149 | Nichani et al. | Oct 2001 | B1 |
6301396 | Michael et al. | Oct 2001 | B1 |
6301440 | Bolle et al. | Oct 2001 | B1 |
6304050 | Skaar et al. | Oct 2001 | B1 |
6307951 | Tanigawa et al. | Oct 2001 | B1 |
6308644 | Diaz | Oct 2001 | B1 |
6341016 | Malione | Jan 2002 | B1 |
6345105 | Nitta et al. | Feb 2002 | B1 |
6357588 | Room et al. | Mar 2002 | B1 |
6381366 | Taycher et al. | Apr 2002 | B1 |
6381375 | Reyzin | Apr 2002 | B1 |
6389029 | McAlear | May 2002 | B1 |
6396949 | Nichani | May 2002 | B1 |
6408109 | Silver et al. | Jun 2002 | B1 |
6442291 | Whitman | Aug 2002 | B1 |
6469734 | Nichani et al. | Oct 2002 | B1 |
6477275 | Melikian et al. | Nov 2002 | B1 |
6496204 | Nakamura | Dec 2002 | B1 |
6496220 | Landert et al. | Dec 2002 | B2 |
6516092 | Bachelder et al. | Feb 2003 | B1 |
6539107 | Michael et al. | Mar 2003 | B1 |
6594623 | Wang et al. | Jul 2003 | B1 |
6624899 | Clark | Sep 2003 | B1 |
6639624 | Bachelder et al. | Oct 2003 | B1 |
6658145 | Silver et al. | Dec 2003 | B1 |
6678394 | Nichani | Jan 2004 | B1 |
6681151 | Weinzimmer et al. | Jan 2004 | B1 |
6684402 | Wolff | Jan 2004 | B1 |
6690354 | Sze | Feb 2004 | B2 |
6701005 | Nichani | Mar 2004 | B1 |
6710770 | Tomasi et al. | Mar 2004 | B2 |
6718074 | Dutta-Choudhury et al. | Apr 2004 | B1 |
6720874 | Fufido et al. | Apr 2004 | B2 |
6724922 | Vilsmeier | Apr 2004 | B1 |
6728582 | Wallack | Apr 2004 | B1 |
6748104 | Bachelder et al. | Jun 2004 | B1 |
6751338 | Wallack | Jun 2004 | B1 |
6751361 | Wagman | Jun 2004 | B1 |
6756910 | Ohba et al. | Jun 2004 | B2 |
6768509 | Bradski | Jul 2004 | B1 |
6771808 | Wallack | Aug 2004 | B1 |
6791461 | Oku et al. | Sep 2004 | B2 |
6798925 | Wagman | Sep 2004 | B1 |
6816187 | Iwai et al. | Nov 2004 | B1 |
6816755 | Habibi et al. | Nov 2004 | B2 |
6850646 | Silver et al. | Feb 2005 | B1 |
6856698 | Silver et al. | Feb 2005 | B1 |
6903177 | Seo et al. | Jun 2005 | B2 |
6919549 | Bamji et al. | Jul 2005 | B2 |
6940545 | Ray et al. | Sep 2005 | B1 |
6963661 | Hattori et al. | Nov 2005 | B1 |
6971580 | Zhu et al. | Dec 2005 | B2 |
6990228 | Wiles et al. | Jan 2006 | B1 |
6993177 | Bachelder | Jan 2006 | B1 |
6999600 | Venetianer et al. | Feb 2006 | B2 |
7003136 | Harville | Feb 2006 | B1 |
7006669 | Lavagnino et al. | Feb 2006 | B1 |
7058204 | Hildreth et al. | Jun 2006 | B2 |
7085622 | Sadighi et al. | Aug 2006 | B2 |
7088236 | Sorensen | Aug 2006 | B2 |
7106898 | Bouguet et al. | Sep 2006 | B2 |
7146028 | Lestideau | Dec 2006 | B2 |
7204254 | Riaziat et al. | Apr 2007 | B2 |
7212228 | Utsumi et al. | May 2007 | B2 |
7239761 | Weilenmann | Jul 2007 | B2 |
7260241 | Fukuhara et al. | Aug 2007 | B2 |
7356425 | Krahnstoever et al. | Apr 2008 | B2 |
7360410 | Steinbichler et al. | Apr 2008 | B2 |
7373270 | Ohashi et al. | May 2008 | B2 |
7382895 | Bramblet et al. | Jun 2008 | B2 |
7400744 | Nichani et al. | Jul 2008 | B2 |
7403648 | Nakamura | Jul 2008 | B2 |
7414732 | Maidhof et al. | Aug 2008 | B2 |
7471846 | Steinberg et al. | Dec 2008 | B2 |
7508974 | Beaty et al. | Mar 2009 | B2 |
7538801 | Hu | May 2009 | B2 |
7539595 | Georgi et al. | May 2009 | B2 |
7609893 | Luo et al. | Oct 2009 | B2 |
7680323 | Nichani | Mar 2010 | B1 |
7777300 | Tews et al. | Aug 2010 | B2 |
20010010731 | Miyatake et al. | Aug 2001 | A1 |
20010030689 | Spinelli | Oct 2001 | A1 |
20020039135 | Heyden | Apr 2002 | A1 |
20020041698 | Ito et al. | Apr 2002 | A1 |
20020113862 | Center et al. | Aug 2002 | A1 |
20020118113 | Oku et al. | Aug 2002 | A1 |
20020118114 | Ohba et al. | Aug 2002 | A1 |
20020135483 | Merheim et al. | Sep 2002 | A1 |
20020191819 | Hashimoto et al. | Dec 2002 | A1 |
20030053660 | Heyden | Mar 2003 | A1 |
20030071199 | Esping et al. | Apr 2003 | A1 |
20030164892 | Shiraishi et al. | Sep 2003 | A1 |
20040017929 | Bramblet et al. | Jan 2004 | A1 |
20040045339 | Nichani et al. | Mar 2004 | A1 |
20040061781 | Fennell et al. | Apr 2004 | A1 |
20040153671 | Schuyler et al. | Aug 2004 | A1 |
20040218784 | Nichani et al. | Nov 2004 | A1 |
20040234118 | Astrom et al. | Nov 2004 | A1 |
20050089214 | Rubbert et al. | Apr 2005 | A1 |
20050105765 | Han et al. | May 2005 | A1 |
20050074140 | Grasso et al. | Jul 2005 | A1 |
20050196047 | Owechko et al. | Sep 2005 | A1 |
20070081714 | Wallack et al. | Apr 2007 | A1 |
20070127774 | Zhang et al. | Jun 2007 | A1 |
20100166294 | Marrion et al. | Jul 2010 | A1 |
Number | Date | Country |
---|---|---|
0 265 302 | Sep 1987 | EP |
0 341 122 | Apr 1989 | EP |
0 527 632 | Feb 1993 | EP |
0 777 381 | Jun 1997 | EP |
0847030 | Jun 1998 | EP |
0 895 696 | Feb 1999 | EP |
0706062 | May 2001 | EP |
0817123 | Sep 2001 | EP |
2 598 019 | Oct 1987 | FR |
62-56814 | Mar 1987 | JP |
8-201021 | Aug 1996 | JP |
2004-504077 | Feb 2004 | JP |
2004-239747 | Aug 2004 | JP |
2005-534026 | Nov 2005 | JP |
WO 9110968 | Jul 1991 | WO |
WO 9511491 | Apr 1995 | WO |
WO 9521376 | Aug 1995 | WO |
WO 9522137 | Aug 1995 | WO |
9631047 | Mar 1996 | WO |
9638820 | Dec 1996 | WO |
WO 9721189 | Jun 1997 | WO |
WO 9722858 | Jun 1997 | WO |
WO 9724692 | Jul 1997 | WO |
WO 9724693 | Jul 1997 | WO |
WO 9739416 | Oct 1997 | WO |
9808208 | Jul 1998 | WO |
WO 9830890 | Jul 1998 | WO |
WO 9852349 | Nov 1998 | WO |
WO 9859490 | Dec 1998 | WO |
WO 9915864 | Apr 1999 | WO |
WO 9927456 | Jun 1999 | WO |
WO 9948000 | Sep 1999 | WO |
0175809 | Oct 2001 | WO |
0248971 | Jun 2002 | WO |
0295692 | Nov 2002 | WO |
WO 02099615 | Dec 2002 | WO |
WO 02100068 | Dec 2002 | WO |
WO 2007044629 | Apr 2007 | WO |
2008147355 | Dec 2008 | WO |
2010077524 | Jul 2010 | WO |
Number | Date | Country | |
---|---|---|---|
20070081714 A1 | Apr 2007 | US |