1. Field of the Invention
The invention relates to automated detection and inspection of objects being manufactured on a production line, and more particularly to the related fields of industrial machine vision and automated image analysis.
2. Description of the Related Art
Industrial manufacturing relies on automatic inspection of objects being manufactured. One form of automatic inspection that has been in common use for decades is based on optoelectronic technologies that use electromagnetic energy, usually infrared or visible light, photoelectric sensors, and some form of electronic decision making.
One well-known form of optoelectronic automatic inspection uses an arrangement of photodetectors. A typical photodetector has a light source and a single photoelectric sensor that responds to the intensity of light that is reflected by a point on the surface of an object, or transmitted along a path that an object may cross. A user-adjustable sensitivity threshold establishes a light intensity above which (or below which) an output signal of the photodetector will be energized.
One photodetector, often called a gate, is used to detect the presence of an object to be inspected. Other photodetectors are arranged relative to the gate to sense the light reflected by appropriate points on the object. By suitable adjustment of the sensitivity thresholds, these other photodetectors can detect whether certain features of the object, such as a label or hole, are present or absent. A decision as to the status of the object (for example, pass or fail) is made using the output signals of these other photodetectors at the time when an object is detected by the gate. This decision is typically made by a programmable logic controller (PLC), or other suitable electronic equipment.
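The gate-and-feature decision described above can be sketched as follows. This is an illustrative sketch only; the function name and the representation of photodetector outputs as booleans are assumptions, not part of any actual PLC program.

```python
# Illustrative sketch of the decision a PLC might make from photodetector
# outputs: when the gate detects an object, sample the feature
# photodetectors and pass the object only if every expected feature is seen.

def inspect_on_gate(gate_detected, feature_outputs):
    """Return 'pass' or 'fail' when the gate detects an object, else None."""
    if not gate_detected:
        return None                     # no object at the inspection point
    return "pass" if all(feature_outputs) else "fail"
```

A missing label or hole would appear as a False feature output, yielding a fail decision at the moment the gate fires.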
Automatic inspection using photodetectors has various advantages. Photodetectors are inexpensive, simple to set up, and operate at very high speed (outputs respond within a few hundred microseconds of the object being detected, although a PLC will take longer to make a decision).
Automatic inspection using photodetectors has various disadvantages, however, including:
Another well-known form of optoelectronic automatic inspection uses a device that can capture a digital image of a two-dimensional field of view in which an object to be inspected is located, and then analyze the image and make decisions. Such a device is usually called a machine vision system, or simply a vision system. The image is captured by exposing a two-dimensional array of photosensitive elements for a brief period, called the integration or shutter time, to light that has been focused on the array by a lens. The array is called an imager and the individual elements are called pixels. Each pixel measures the intensity of light falling on it during the shutter time. The measured intensity values are then converted to digital numbers and stored in the memory of the vision system to form the image, which is analyzed by a digital processing element such as a computer, using methods well-known in the art to determine the status of the object being inspected.
In some cases the objects are brought to rest in the field of view, and in other cases the objects are in continuous motion through the field of view. An event external to the vision system, such as a signal from a photodetector, or a message from a PLC, computer, or other piece of automation equipment, is used to inform the vision system that an object is located in the field of view, and therefore an image should be captured and analyzed. Such an event is called a trigger.
Machine vision systems avoid the disadvantages associated with using an arrangement of photodetectors. They can analyze patterns of brightness reflected from extended areas, easily handle many distinct features on the object, accommodate line changeovers through software systems and/or processes, and handle uncertain and variable object locations.
Machine vision systems have disadvantages compared to an arrangement of photodetectors, including:
Machine vision systems have limitations that arise because they make decisions based on a single image of each object, located in a single position in the field of view (each object may be located in a different and unpredictable position, but for each object there is only one such position on which a decision is based). This single position provides information from a single viewing perspective, and a single orientation relative to the illumination. The use of only a single perspective often leads to incorrect decisions. It has long been observed, for example, that a change in perspective of as little as a single pixel can in some cases change an incorrect decision to a correct one. By contrast, a human inspecting an object usually moves it around relative to his eyes and the lights to make a more reliable decision.
Some prior art vision systems capture multiple images of an object at rest in the field of view, and then average those images to produce a single image for analysis. The averaging reduces measurement noise and thereby improves the decision making, but there is still only one perspective and illumination orientation, considerable additional time is needed, and the object must be brought to rest.
Some prior art vision systems that are designed to read alphanumeric codes, bar codes, or 2D matrix codes will capture multiple images and vary the illumination direction until either a correct read is obtained, or all variations have been tried. This method works because such codes contain sufficient redundant information that the vision system can be sure when a read is correct, and because the object can be held stationary in the field of view for enough time to try all of the variations. The method is generally not suitable for object inspection, and is not suitable when objects are in continuous motion. Furthermore, the method still provides only one viewing perspective, and the decision is based on only a single image, because information from the images that did not result in a correct read is discarded.
Some prior art vision systems are used to guide robots in pick-and-place applications where objects are in continuous motion through the field of view. Some such systems are designed so that the objects move at a speed at which the vision system has the opportunity to see each object at least twice. The objective of this design, however, is not to obtain the benefit of multiple perspectives, but rather to ensure that objects are not missed entirely if conditions arise that temporarily slow down the vision system, such as a higher than average number of objects in the field of view. These systems do not make use of the additional information potentially provided by the multiple perspectives.
Machine vision systems have additional limitations arising from their use of a trigger signal. The need for a trigger signal makes the setup more complex—a photodetector must be mounted and adjusted, or software must be written for a PLC or computer to provide an appropriate message. When a photodetector is used, which is almost always the case when the objects are in continuous motion, a production line changeover may require it to be physically moved, which can offset some of the advantages of a vision system. Furthermore, photodetectors can only respond to a change in light intensity reflected from an object or transmitted along a path. In some cases, such a condition may not be sufficient to reliably detect when an object has entered the field of view.
Some prior art vision systems that are designed to read alphanumeric codes, bar codes, or two dimensional (2D) matrix codes can operate without a trigger by continuously capturing images and attempting to read a code. For the same reasons described above, such methods are generally not suitable for object inspection, and are not suitable when objects are in continuous motion.
Some prior art vision systems used with objects in continuous motion can operate without a trigger using a method often called self-triggering. These systems typically operate by monitoring one or more portions of captured images for a change in brightness or color that indicates the presence of an object. Self-triggering is rarely used in practice due to several limitations:
Many of the limitations of machine vision systems arise in part because they operate too slowly to capture and analyze multiple perspectives of objects in motion, and too slowly to react to events happening in the field of view. Since most vision systems can capture a new image simultaneously with analysis of the current image, the maximum rate at which a vision system can operate is determined by the larger of the capture time and the analysis time. Overall, one of the most significant factors in determining this rate is the number of pixels in the imager.
The time needed to capture an image is determined primarily by the number of pixels in the imager, for two basic reasons. First, the shutter time is determined by the amount of light available and the sensitivity of each pixel. Since having more pixels generally means making them smaller and therefore less sensitive, it is generally the case that increasing the number of pixels increases the shutter time. Second, the conversion and storage time is proportional to the number of pixels. Thus the more pixels one has, the longer the capture time.
For at least the last 25 years, prior art vision systems generally have used about 300,000 pixels; more recently some systems have become available that use over 1,000,000 pixels, and over the years a small number of systems have used as few as 75,000. Just as with digital cameras, the recent trend is to more pixels for improved image resolution. Over the same period of time, during which computer speeds have improved a million-fold and imagers have changed from vacuum tubes to solid state, machine vision image capture times generally have improved from about 1/30 second to about 1/60 second, only a factor of two. Faster computers have allowed more sophisticated analysis, but the maximum rate at which a vision system can operate has hardly changed.
Recently, CMOS imagers have appeared that allow image capture from only a small portion of the photosensitive elements, reducing the conversion and storage time. Theoretically such imagers can support very short capture times, but in practice, since the light sensitivity of the pixels is no better than when the full array is used, it is difficult and/or expensive to achieve the very short shutter times that would be needed to make such imagers useful at high speed.
Due in part to the image capture time bottleneck, image analysis methods suited to operating rates significantly higher than 60 images per second have not been developed. Similarly, use of multiple perspectives, operation without triggers, production of appropriately synchronized output signals, and a variety of other useful functions have not been adequately considered in the prior art.
Recently, experimental devices called focal plane array processors have been developed in research laboratories. These devices integrate analog signal processing elements and photosensitive elements on one substrate, and can operate at rates in excess of 10,000 images per second. The analog signal processing elements are severely limited in capability compared to digital image analysis, however, and it is not yet clear whether such devices can be applied to automated industrial inspection.
Considering the disadvantages of an arrangement of photodetectors, and the disadvantages and limitations of current machine vision systems, there is a compelling need for systems and methods that make use of two-dimensional imagers and digital image analysis for improved detection and inspection of objects in industrial manufacturing.
The present invention provides systems and methods for automatic optoelectronic detection and inspection of objects, based on capturing digital images of a two-dimensional field of view in which an object to be detected or inspected may be located, and then analyzing the images and making decisions. These systems and methods analyze patterns of brightness reflected from extended areas, handle many distinct features on the object, accommodate line changeovers through software means, and handle uncertain and variable object locations. They are less expensive and easier to set up than prior art machine vision systems, and operate at much higher speeds. These systems and methods furthermore make use of multiple perspectives of moving objects, operate without triggers, provide appropriately synchronized output signals, and provide other significant and useful capabilities that will become apparent to those skilled in the art.
While the present invention is directed primarily at applications where the objects are in continuous motion, and provides specific and significant advantages in those cases, it may also be used advantageously over prior art systems in applications where objects are brought to rest.
One aspect of the invention is an apparatus, called a vision detector, that can capture and analyze a sequence of images at higher speeds than prior art vision systems. An image in such a sequence that is captured and analyzed is called a frame. The rate at which frames are captured and analyzed, called the frame rate, is sufficiently high that a moving object is seen in multiple consecutive frames as it passes through the field of view (FOV). Since the object moves somewhat between successive frames, it is located in multiple positions in the FOV, and therefore it is seen from multiple viewing perspectives and positions relative to the illumination.
Another aspect of the invention is a method, called dynamic image analysis, for inspecting objects by capturing and analyzing multiple frames for which the object is located in the field of view, and basing a result on a combination of evidence obtained from each of those frames. The method provides significant advantages over prior art machine vision systems that make decisions based on a single frame.
Yet another aspect of the invention is a method, called visual event detection, for detecting events that may occur in the field of view. An event can be an object passing through the field of view, and by using visual event detection the object can be detected without the need for a trigger signal.
Additional aspects of the invention will become apparent by a study of the figures and detailed descriptions given herein.
One advantage of the methods and apparatus of the present invention for moving objects is that by considering the evidence obtained from multiple viewing perspectives and positions relative to the illumination, a vision detector is able to make a more reliable decision than a prior art vision system, just as a human inspecting an object may move it around relative to his eyes and the lights to make a more reliable decision.
Another advantage is that objects can be detected reliably without a trigger signal, such as a photodetector. This reduces cost and simplifies installation, and allows a production line to be switched to a different product by making a software change in the vision detector without having to manually reposition a photodetector.
Another advantage is that a vision detector can track the position of an object as it moves through the field of view, and determine its speed and the time at which it crosses some fixed reference point. Output signals can then be synchronized to this fixed reference point, and other useful information about the object can be obtained as taught herein.
In order to obtain images from multiple perspectives, it is desirable that an object to be detected or inspected moves no more than a small fraction of the field of view between successive frames, often no more than a few pixels. As taught herein, it is generally desirable that the object motion be no more than about one-quarter of the FOV per frame, and in typical embodiments no more than about 5% of the FOV. It is desirable that this be achieved not by slowing down a manufacturing process but by providing a sufficiently high frame rate. In an example system the frame rate is at least 200 frames/second, and in another example the frame rate is at least 40 times the average rate at which objects are presented to the vision detector.
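The relationship between line speed, FOV size, and the required frame rate can be illustrated with simple arithmetic. The numeric values below are assumptions chosen for illustration, not figures from the specification.

```python
# Illustrative arithmetic: the frame rate needed so that an object moves no
# more than a chosen fraction of the field of view between successive frames.

def required_frame_rate(line_speed_mm_s, fov_width_mm, max_motion_fraction):
    max_motion_mm = fov_width_mm * max_motion_fraction   # allowed motion per frame
    return line_speed_mm_s / max_motion_mm               # frames per second
```

For example, a 500 mm/s line with a 50 mm FOV and motion limited to 5% of the FOV requires 500 / (50 × 0.05) = 200 frames/second, matching the first example system mentioned above.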
An exemplary system is taught that can capture and analyze up to 500 frames/second. This system makes use of an ultra-sensitive imager that has far fewer pixels than prior art vision systems. The high sensitivity allows very short shutter times using very inexpensive LED illumination, which in combination with the relatively small number of pixels allows very short image capture times. The imager is interfaced to a digital signal processor (DSP) that can receive and store pixel data simultaneously with analysis operations. Using methods taught herein and implemented by means of suitable software for the DSP, the time to analyze each frame generally can be kept to within the time needed to capture the next frame. The capture and analysis methods and apparatus combine to provide the desired high frame rate. By carefully matching the capabilities of the imager, DSP, and illumination with the objectives of the invention, the exemplary system can be significantly less expensive than prior art machine vision systems.
The method of visual event detection involves capturing a sequence of frames and analyzing each frame to determine evidence that an event is occurring or has occurred. When visual event detection is used to detect objects without the need for a trigger signal, the analysis determines evidence that an object is located in the field of view.
In an exemplary method the evidence is in the form of a value, called an object detection weight, that indicates a level of confidence that an object is located in the field of view. The value may be a simple yes/no choice that indicates high or low confidence, a number that indicates a range of levels of confidence, or any item of information that conveys evidence. One example of such a number is a so-called fuzzy logic value, further described herein. Note that no machine can make a perfect decision from an image; a vision detector instead makes judgments based on imperfect evidence.
When performing object detection, a test is made for each frame to decide whether the evidence is sufficient that an object is located in the field of view. If a simple yes/no value is used, the evidence may be considered sufficient if the value is “yes”. If a number is used, sufficiency may be determined by comparing the number to a threshold. Frames where the evidence is sufficient are called active frames. Note that what constitutes sufficient evidence is ultimately defined by a human user who configures the vision detector based on an understanding of the specific application at hand. The vision detector automatically applies that definition in making its decisions.
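The per-frame sufficiency test might be sketched as follows. The function name and the default threshold of 0.5 are illustrative assumptions; the text makes clear that the user chooses what constitutes sufficient evidence.

```python
# Illustrative sketch: decide whether the evidence in a frame is sufficient,
# i.e. whether the frame is "active". Evidence may be a simple yes/no value
# or a number (e.g. a fuzzy logic value) compared against a threshold.

def is_active_frame(detection_weight, threshold=0.5):
    if isinstance(detection_weight, bool):      # simple yes/no evidence
        return detection_weight
    return detection_weight >= threshold        # numeric evidence vs. threshold
```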
When performing object detection, each object passing through the field of view will produce multiple active frames due to the high frame rate of the vision detector. These frames may not be strictly consecutive, however, because as the object passes through the field of view there may be some viewing perspectives, or other conditions, for which the evidence that the object is located in the field of view is not sufficient. Therefore it is desirable that detection of an object begins when an active frame is found, but does not end until a number of consecutive inactive frames are found. This number can be chosen as appropriate by a user.
Once a set of active frames has been found that may correspond to an object passing through the field of view, it is desirable to perform a further analysis to determine whether an object has indeed been detected. This further analysis may consider some statistics of the active frames, including the number of active frames, the sum of the object detection weights, the average object detection weight, and the like.
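The grouping of active frames into a candidate object, followed by the statistical check that an object was indeed detected, might be sketched as below. All names and parameter values (the threshold, the number of consecutive inactive frames that terminates a candidate, and the minimum active-frame count) are illustrative assumptions; the text leaves these choices to the user, and other statistics such as the sum or average of the object detection weights could be used instead.

```python
# Illustrative sketch: group a stream of per-frame object detection weights
# into candidate objects. A candidate starts at an active frame, ends after
# max_inactive consecutive inactive frames, and counts as a detected object
# only if it contains at least min_active active frames.

def detect_objects(frame_weights, threshold=0.5, max_inactive=3, min_active=2):
    objects, current, inactive_run = [], [], 0
    for i, w in enumerate(frame_weights):
        if w >= threshold:                       # active frame
            current.append((i, w))
            inactive_run = 0
        elif current:                            # inactive, candidate open
            inactive_run += 1
            if inactive_run >= max_inactive:     # candidate is complete
                if len(current) >= min_active:
                    objects.append(current)
                current, inactive_run = [], 0
    if len(current) >= min_active:               # flush a trailing candidate
        objects.append(current)
    return objects
```

Note how a single inactive frame between active frames (for example, an unfavorable viewing perspective) does not split one object into two.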
The above examples of visual event detection are intended to be illustrative and not comprehensive. Clearly there are many ways to accomplish the objectives of visual event detection within the spirit of the invention that will occur to one of ordinary skill.
The method of dynamic image analysis involves capturing and analyzing multiple frames to inspect an object, where “inspect” means to determine some information about the status of the object. In one example of this method, the status of an object includes whether or not the object satisfies inspection criteria chosen as appropriate by a user.
In some aspects of the invention dynamic image analysis is combined with visual event detection, so that the active frames chosen by the visual event detection method are the ones used by the dynamic image analysis method to inspect the object. In other aspects of the invention, the frames to be used by dynamic image analysis can be captured in response to a trigger signal.
Each such frame is analyzed to determine evidence that the object satisfies the inspection criteria. In one exemplary method, the evidence is in the form of a value, called an object pass score, that indicates a level of confidence that the object satisfies the inspection criteria. As with object detection weights, the value may be a simple yes/no choice that indicates high or low confidence, a number, such as a fuzzy logic value, that indicates a range of levels of confidence, or any item of information that conveys evidence.
The status of the object may be determined from statistics of the object pass scores, such as an average or percentile of the object pass scores. The status may also be determined by weighted statistics, such as a weighted average or weighted percentile, using the object detection weights. Weighted statistics effectively weight evidence more heavily from frames wherein the confidence is higher that the object is actually located in the field of view for that frame.
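The weighted statistics described above might be sketched as follows; the function name and the acceptance level of 0.5 are assumptions chosen for illustration.

```python
# Illustrative sketch: determine object status from a weighted average of
# per-frame pass scores, where the weights are the object detection weights.
# Frames in which the object was more confidently present count for more.

def weighted_pass_decision(pass_scores, detection_weights, accept=0.5):
    total = sum(detection_weights)
    avg = sum(p * w for p, w in zip(pass_scores, detection_weights)) / total
    return avg, avg >= accept
```

For example, a frame scoring 1.0 with detection weight 3 and a frame scoring 0.0 with detection weight 1 yield a weighted average of 0.75, so the confidently seen frame dominates the decision.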
Evidence for object detection and inspection is obtained by examining a frame for information about one or more visible features of the object. A visible feature is a portion of the object wherein the amount, pattern, or other characteristic of emitted light conveys information about the presence, identity, or status of the object. Light can be emitted by any process or combination of processes, including but not limited to reflection, transmission, or refraction of a source external or internal to the object, or directly from a source internal to the object.
One aspect of the invention is a method for obtaining evidence, including object detection weights and object pass scores, by image analysis operations on one or more regions of interest in each frame for which the evidence is needed. In one example of this method, the image analysis operation computes a measurement based on the pixel values in the region of interest, where the measurement is responsive to some appropriate characteristic of a visible feature of the object. The measurement is converted to a logic value by a threshold operation, and the logic values obtained from the regions of interest are combined to produce the evidence for the frame. The logic values can be binary or fuzzy logic values, with the thresholds and logical combination being binary or fuzzy as appropriate.
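The measurement-threshold-combination chain might be sketched as below. The choice of mean brightness as the measurement, the linear ramp as the fuzzy threshold shape, minimum as the fuzzy AND, and all names and numeric limits are illustrative assumptions.

```python
# Illustrative sketch: each region of interest (ROI) yields a measurement
# (here, mean pixel brightness); a fuzzy threshold maps the measurement to a
# logic value in [0, 1]; per-ROI logic values are combined with fuzzy AND
# (the minimum) to produce the evidence for the frame.

def mean_brightness(pixels):
    flat = [p for row in pixels for p in row]
    return sum(flat) / len(flat)

def fuzzy_threshold(measurement, low, high):
    """0 below low, 1 above high, linear ramp in between."""
    if measurement <= low:
        return 0.0
    if measurement >= high:
        return 1.0
    return (measurement - low) / (high - low)

def frame_evidence(rois, low=80, high=120):
    logic_values = [fuzzy_threshold(mean_brightness(r), low, high) for r in rois]
    return min(logic_values)            # fuzzy AND across regions of interest
```

A binary embodiment would simply replace the ramp with a hard comparison and the minimum with a logical AND.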
For visual event detection, evidence that an object is located in the field of view is effectively defined by the regions of interest, measurements, thresholds, logical combinations, and other parameters further described herein, which are collectively called the configuration of the vision detector and are chosen by a user as appropriate for a given application of the invention. Similarly, the configuration of the vision detector defines what constitutes sufficient evidence.
For dynamic image analysis, evidence that an object satisfies the inspection criteria is also effectively defined by the configuration of the vision detector.
One aspect of the invention includes determining a result comprising information about detection or inspection of an object. The result may be reported to automation equipment for various purposes, including equipment, such as a reject mechanism, that may take some action based on the report. In one example the result is an output pulse that is produced whenever an object is detected. In another example, the result is an output pulse that is produced only for objects that satisfy the inspection criteria. In yet another example, useful for controlling a reject actuator, the result is an output pulse that is produced only for objects that do not satisfy the inspection criteria.
Another aspect of the invention is a method for producing output signals that are synchronized to a time, shaft encoder count, or other event marker that indicates when an object has crossed a fixed reference point on a production line. A synchronized signal provides information about the location of the object in the manufacturing process, which can be used to advantage by automation equipment, such as a downstream reject actuator.
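One way such a crossing time could be estimated is sketched below, using a least-squares line fit over the (time, position) observations from the frames in which the object was tracked. The fitting method and all names are illustrative assumptions; the specification teaches only that the position is tracked and the crossing time determined.

```python
# Illustrative sketch: estimate the object's speed and the instant it crossed
# a fixed reference position from per-frame (time, position) observations,
# via a least-squares line fit. An output signal can then be synchronized to
# the estimated crossing instant (or the corresponding encoder count).

def crossing_time(times, positions, reference):
    n = len(times)
    mt, mp = sum(times) / n, sum(positions) / n
    speed = (sum((t - mt) * (p - mp) for t, p in zip(times, positions))
             / sum((t - mt) ** 2 for t in times))        # least-squares slope
    return mt + (reference - mp) / speed, speed
```

Because the object is seen in multiple frames, the crossing instant can be interpolated to a finer resolution than the frame period.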
The invention will be more fully understood from the following detailed description, in conjunction with the accompanying figures, wherein:
Discussion of Prior Art
The objects move past a photodetector 130, which emits a beam of light 135 for detecting the presence of an object. Trigger signals 162 and 166 are sent from the photodetector to a PLC 140 and a machine vision system 150, respectively. On the leading edge of the trigger signal 166 the vision system 150 captures an image of the object, inspects the image to determine if the expected features are present, and reports the inspection results to the PLC via signal 160.
On the leading edge of the trigger signal 162 the PLC records the time and/or encoder count. At some later time the PLC receives the results of the inspection from the vision system, and may do various things with those results as appropriate. For example the PLC may control a reject actuator 170 via signal 164 to remove a defective object 116 from the conveyer. Since the reject actuator is generally downstream from the inspection point defined by the photodetector beam 135, the PLC must delay the signal 164 to the reject actuator until the defective part is in position in front of the reject actuator. Since the time it takes the vision system to complete the inspection is usually somewhat variable, this delay must be relative to the trigger signal 162, i.e. relative to the time and/or count recorded by the PLC. A time delay is appropriate when the conveyer is moving at constant speed; in other cases, an encoder is preferred.
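The encoder-based delay described above amounts to simple arithmetic on the latched count. The function name and units are illustrative assumptions.

```python
# Illustrative sketch: given the encoder count latched on the trigger's
# leading edge, compute the count at which the reject actuator should fire,
# based on the distance from the inspection beam to the actuator. Using
# counts rather than time makes the delay independent of conveyer speed.

def reject_fire_count(trigger_count, beam_to_actuator_mm, counts_per_mm):
    return trigger_count + beam_to_actuator_mm * counts_per_mm
```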
In the example of
The trigger 230 is some event external to the vision system, such as a signal from a photodetector 130, or a message from a PLC, computer, or other piece of automation equipment.
The image capture step 230 starts by exposing a two-dimensional array of photosensitive elements, called pixels, for a brief period, called the integration or shutter time, to an image that has been focused on the array by a lens. Each pixel measures the intensity of light falling on it during the shutter time. The measured intensity values are then converted to digital numbers and stored in the memory of the vision system.
During the analyze step 240 the vision system operates on the stored pixel values using methods well-known in the art to determine the status of the object being inspected. During the report step 250, the vision system communicates information about the status of the object to appropriate automation equipment, such as a PLC.
A PLC 340 samples signals 330 and 333 from photodetectors 300 and 310 on the leading edge of signal 336 from photodetector 330 to determine the presence of features 120 and 124. If one or both features are missing, signal 164 is sent to reject actuator 170, suitably delayed based on encoder 180, to remove a defective object from the conveyer.
Basic Operation of Present Invention
In an alternate embodiment, the vision detector sends signals to a PLC for various purposes, which may include controlling a reject actuator.
In another embodiment, suitable in extremely high speed applications or where the vision detector cannot reliably detect the presence of an object, a photodetector is used to detect the presence of an object and sends a signal to the vision detector for that purpose.
In yet another embodiment there are no discrete objects, but rather material flows past the vision detector continuously, for example a web. In this case the material is inspected continuously, and signals are sent by the vision detector to automation equipment, such as a PLC, as appropriate.
When a vision detector detects the presence of discrete objects by visual appearance, it is said to be operating in visual event detection mode. When a vision detector detects the presence of discrete objects using an external signal such as from a photodetector, it is said to be operating in external trigger mode. When a vision detector continuously inspects material, it is said to be operating in continuous analysis mode.
If capture and analysis are overlapped, the rate at which a vision detector can capture and analyze images is determined by the longer of the capture time and the analysis time. This is the “frame rate”.
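This relationship is simple enough to state directly; the function name and the example times below are illustrative assumptions.

```python
# Illustrative sketch: with capture of the next image overlapped with
# analysis of the current one, throughput is limited by the longer of the
# capture time and the analysis time.

def frame_rate(capture_time_s, analysis_time_s):
    return 1.0 / max(capture_time_s, analysis_time_s)
```

For example, a 2 ms capture fully overlapped with a 1.5 ms analysis yields 500 frames/second, which is consistent with the exemplary system described above.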
The present invention allows objects to be detected reliably without a trigger signal, such as that provided by a photodetector 130. Referring to
Referring again to
Each analysis step first considers the evidence that an object is present. Frames where the evidence is sufficient are called active. Analysis steps for active frames are shown with a thick border, for example analysis step 540. In an illustrative embodiment, inspection of an object begins when an active frame is found, and ends when some number of consecutive inactive frames are found. In the example of
At the time that inspection of an object is complete, for example at the end of analysis step 548, decisions are made on the status of the object based on the evidence obtained from the active frames. In an illustrative embodiment, if an insufficient number of active frames were found then there is considered to be insufficient evidence that an object was actually present, and so operation continues as if no active frames were found. Otherwise an object is judged to have been detected, and evidence from the active frames is judged in order to determine its status, for example pass or fail. A variety of methods may be used to detect objects and determine status within the scope of the invention; some are described below and many others will occur to those skilled in the art.
Once an object has been detected and a judgment made, a report may be made to appropriate automation equipment, such as a PLC, using signals well-known in the art. In such a case a report step similar to step 250 in
Note that the report 560 may be delayed well beyond the inspection of subsequent objects such as 510. The vision detector uses well-known first-in first-out (FIFO) buffer methods to hold the reports until the appropriate time.
Once inspection of an object is complete, the vision detector may enter an idle step 580. Such a step is optional, but may be desirable for several reasons. If the maximum object rate is known, there is no need to be looking for an object until just before a new one is due. An idle step will eliminate the chance of false object detection at times when an object could not arrive, and will extend the lifetime of the illumination system because the lights can be kept off during the idle step.
In another embodiment, the report step is delayed in a manner equivalent to that shown in
Illustrative Apparatus
The DSP 900 can be any device capable of digital computation, information storage, and interface to other digital elements, including but not limited to a general-purpose computer, a PLC, or a microprocessor. It is desirable that the DSP 900 be inexpensive but fast enough to handle a high frame rate. It is further desirable that it be capable of receiving and storing pixel data from the imager simultaneously with image analysis.
In the illustrative embodiment of
The high frame rate desired by a vision detector suggests the use of an imager unlike those that have been used in prior art vision systems. It is desirable that the imager be unusually light sensitive, so that it can operate with extremely short shutter times using inexpensive illumination. It is further desirable that it be able to digitize and transmit pixel data to the DSP far faster than prior art vision systems. It is moreover desirable that it be inexpensive and have a global shutter.
These objectives may be met by choosing an imager with much higher light sensitivity and lower resolution than those used by prior art vision systems. In the illustrative embodiment of
It is desirable that the illumination 940 be inexpensive and yet bright enough to allow short shutter times. In an illustrative embodiment, a bank of high-intensity red LEDs operating at 630 nanometers is used, for example the HLMP-ED25 manufactured by Agilent Technologies. In another embodiment, high-intensity white LEDs are used to implement desired illumination.
In the illustrative embodiment of
As used herein an image capture device provides means to capture and store a digital image. In the illustrative embodiment of
As used herein an analyzer provides means for analysis of digital data, including but not limited to a digital image. In the illustrative embodiment of
As used herein an output signaler provides means to produce an output signal responsive to an analysis. In the illustrative embodiment of
It will be understood by one of ordinary skill that there are many alternate arrangements, devices, and software instructions that could be used within the scope of the present invention to implement an image capture device 980, analyzer 982, and output signaler 984.
A variety of engineering tradeoffs can be made to provide efficient operation of an apparatus according to the present invention for a specific application. Consider the following definitions:
From these definitions it can be seen that
To achieve good use of the available resolution of the imager, it is desirable that b be at least 50%. For dynamic image analysis, n should be at least 2. Therefore it is further desirable that the object moves no more than about one-quarter of the field of view between successive frames.
In an illustrative embodiment, reasonable values might be b=75%, e=5%, and n=4. This implies that m≦5%, i.e. that one would choose a frame rate so that an object would move no more than about 5% of the FOV between frames. If manufacturing conditions were such that s=2, then the frame rate r would need to be at least approximately 40 times the object presentation rate p. To handle an object presentation rate of 5 Hz, which is fairly typical of industrial manufacturing, the desired frame rate would be at least around 200 Hz. This rate could be achieved using an LM9630 with at most a 3.3 millisecond shutter time, as long as the image analysis is arranged so as to fit within the 5 millisecond frame period. Using available technology, it would be feasible to achieve this rate using an imager containing up to about 40,000 pixels.
With the same illustrative embodiment and a higher object presentation rate of 12.5 Hz, the desired frame rate would be at least approximately 500 Hz. An LM9630 could handle this rate by using at most a 300 microsecond shutter.
In another illustrative embodiment, one might choose b=75%, e=15%, and n=5, so that m≦2%. With s=2 and p=5 Hz, the desired frame rate would again be at least approximately 500 Hz.
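For illustration only (Python; not part of the specification), the tradeoffs in the preceding examples can be sketched numerically. The relationships m ≦ (1−b−e)/n and r ≧ s·p/m are assumptions reconstructed from the worked numbers above, not quoted from the original definitions:

```python
# Sketch of the frame-rate tradeoff implied by the worked examples above.
# Assumed relationships (reconstructed from the numeric examples, not
# quoted from the specification):
#   m <= (1 - b - e) / n   and   r >= s * p / m
# where b, e, n, m, s, p, r are as defined in the text.

def max_motion_per_frame(b, e, n):
    """Largest per-frame object motion (fraction of FOV) that still
    yields n active frames, given occupancy b and margin e."""
    return (1.0 - b - e) / n

def min_frame_rate(m, s, p):
    """Frame rate needed so the object moves at most m per frame, with
    safety factor s and object presentation rate p (Hz)."""
    return s * p / m

m = max_motion_per_frame(b=0.75, e=0.05, n=4)
r = min_frame_rate(m, s=2, p=5.0)
print(round(m, 3), round(r))   # 0.05 200 -- matching the text
```

With b=75%, e=15%, n=5 the same sketch gives m≦2% and a required rate of about 500 Hz, agreeing with the second embodiment.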
In an alternate embodiment, a rectangular array 1050 of 16 LEDs, including exemplary LED 1060, surrounds a lens 1080. The array is divided into four banks as shown, including example bank 1070.
The capabilities of dynamic image analysis according to the present invention can be enhanced in some cases by controlling the LEDs so that successive frames are captured with varying banks illuminated. By considering the evidence obtained from frames illuminated from varying directions, it is possible to reliably detect features that would be difficult to detect with a fixed illumination direction. Accordingly, the present invention allows analysis utilizing varying direction illumination with moving objects.
Fuzzy Logic Decision Making
A fuzzy logic value is a number between 0 and 1 that represents an estimate of confidence that some specific condition is true. A value of 1 signifies high confidence that the condition is true, 0 signifies high confidence that the condition is false, and intermediate values signify intermediate levels of confidence.
The more familiar binary logic is a subset of fuzzy logic, where the confidence values are restricted to just 0 and 1. Therefore, any embodiment described herein that uses fuzzy logic values can use as an alternative binary logic values, with any fuzzy logic method or apparatus using those values replaced with an equivalent binary logic method or apparatus.
Just as binary logic values are obtained from raw measurements by using a threshold, fuzzy logic values are obtained using a fuzzy threshold. Referring to
In an illustrative embodiment, a fuzzy threshold comprises two numbers shown on the x-axis, low threshold t0 1120, and high threshold t1 1122, corresponding to points on the function 1124 and 1126. The fuzzy threshold can be defined by the equation
Note that this function works just as well when t1<t0. Other functions can also be used for a fuzzy threshold, such as the sigmoid
where t and σ are threshold parameters. In embodiments where simplicity is a goal, a conventional binary threshold can be used, resulting in binary logic values.
Fuzzy decision making is based on fuzzy versions of AND 1140, OR 1150, and NOT 1160. A fuzzy AND of two or more fuzzy logic values is the minimum value, and a fuzzy OR is the maximum value. Fuzzy NOT of ƒ is 1−ƒ. Fuzzy logic is identical to binary logic when the fuzzy logic values are restricted to 0 and 1.
In an illustrative embodiment, whenever a hard true/false decision is needed, a fuzzy logic value is considered true if it is at least 0.5, false if it is less than 0.5.
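For illustration only (Python; not part of the specification), the fuzzy operations described above can be sketched as follows. The piecewise-linear form of the fuzzy threshold is an assumption consistent with the t0/t1 description, since the original equation is not reproduced here:

```python
def fuzzy_threshold(a, t0, t1):
    """Piecewise-linear fuzzy threshold: 0 at t0, 1 at t1, linear
    between. Works equally well when t1 < t0 (output then decreases
    as a increases)."""
    if t0 == t1:
        return 1.0 if a >= t1 else 0.0   # degenerate: binary threshold
    f = (a - t0) / (t1 - t0)
    return min(max(f, 0.0), 1.0)

def fuzzy_and(*vals): return min(vals)   # fuzzy AND: minimum value
def fuzzy_or(*vals):  return max(vals)   # fuzzy OR: maximum value
def fuzzy_not(f):     return 1.0 - f     # fuzzy NOT
def is_true(f):       return f >= 0.5    # hard decision when needed

print(fuzzy_threshold(65, 50, 80))             # 0.5
print(fuzzy_and(0.9, 0.7, fuzzy_not(0.2)))     # 0.7
```

Restricting the values to 0 and 1 reduces every operation above to ordinary binary logic, as the text notes.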
It will be clear to one skilled in the art that there is nothing critical about the values 0 and 1 as used in connection with fuzzy logic herein. Any number could be used to represent high confidence that a condition is true, and any different number could be used to represent high confidence that the condition is false, with intermediate values representing intermediate levels of confidence.
Dynamic Image Analysis
1. Is an object, or a set of visible features of an object, located in the field of view?
2. If so, what is the status of the object?
Information comprising evidence that an object is located in the field of view is called an object detection weight. Information comprising evidence regarding the status of an object is called an object pass score. In various illustrative embodiments, the status of the object comprises whether or not the object satisfies inspection criteria chosen as appropriate by a user. In the following, an object that satisfies the inspection criteria is sometimes said to “pass inspection”.
In the illustrative embodiment of
In the illustrative embodiment of
In the example of
In one embodiment, an object is judged to have been detected if the number of active frames found exceeds some threshold. In another embodiment, an object is judged to have been detected if the total object detection weight over all active frames exceeds some threshold. These thresholds are set as appropriate for a given application (see
In the illustrative embodiment of
where the summation is over all active frames. The effect of this formula is to average the object pass scores, but to weight each score based on the confidence that the object really did appear in the corresponding frame.
In an alternate embodiment, an object is judged to pass inspection if the average of the object pass scores is at least 0.5. This is equivalent to a weighted average wherein all of the weights are equal.
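For illustration only (Python; not part of the specification), the weighted-average judgment of the preceding paragraphs can be sketched as follows, with the summation taken over the active frames:

```python
def weighted_pass_score(detect_weights, pass_scores):
    """Average of per-frame pass scores, each weighted by the object
    detection weight (confidence that the object really appeared in
    that frame), summed over all active frames."""
    total = sum(detect_weights)
    if total == 0:
        return 0.0
    return sum(d * p for d, p in zip(detect_weights, pass_scores)) / total

# Three active frames: the low-confidence frame (d=0.2) barely matters.
d = [0.9, 0.8, 0.2]
p = [0.9, 0.8, 0.1]
print(weighted_pass_score(d, p) >= 0.5)   # True -> judged to pass
```

Setting all weights equal reduces this to the plain average of the alternate embodiment.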
In the example of
The weighted percentile method is based on the fraction Q(p) of total weight where the pass score is at least p:
The object is judged to pass if Q(p) is at least some threshold t. In the illustrative embodiment of
Useful behavior is obtained using different values of t. For example, if t=50%, the object is judged to pass inspection if the weighted median score is at least p. Weighted median is similar to weighted average, but with properties more appropriate in some cases. For higher values, for example t=90%, the object will be judged to pass inspection only if the overwhelming majority of the weight corresponds to active frames where the pass score is at least p. For t=100%, the object will be judged to pass inspection only if all of the active frames have a pass score that is at least p. The object may also be judged to pass inspection if Q(p) is greater than 0, which means that at least one active frame has a pass score that is at least p.
In another useful variation, the object is judged to pass inspection based on the total weight where the pass score is at least p, instead of the fraction of total weight.
In an alternate embodiment, a percentile method is used based on a count of the frames where the pass score is at least p. This is equivalent to a weighted percentile method wherein all of the weights are equal.
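For illustration only (Python; not part of the specification), the weighted percentile method can be sketched as follows, with Q(p) the fraction of total weight where the pass score is at least p:

```python
def Q(detect_weights, pass_scores, p):
    """Fraction of total detection weight contributed by active frames
    whose pass score is at least p."""
    total = sum(detect_weights)
    if total == 0:
        return 0.0
    hit = sum(d for d, s in zip(detect_weights, pass_scores) if s >= p)
    return hit / total

def passes(detect_weights, pass_scores, p, t):
    """Weighted percentile decision: pass if Q(p) reaches threshold t."""
    return Q(detect_weights, pass_scores, p) >= t

d = [1.0, 1.0, 0.5]
s = [0.9, 0.6, 0.2]
print(Q(d, s, 0.5))             # 0.8 -> 80% of weight scores >= 0.5
print(passes(d, s, 0.5, 0.90))  # False at t = 90%
```

With t=50% this behaves like a weighted median test; with all weights equal it reduces to the frame-count percentile method of the alternate embodiment.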
The above descriptions of methods for weighing evidence to determine whether an object has been detected, and whether it passes inspection, are intended as examples of useful embodiments, but do not limit the methods that can be used within the scope of the invention. For example, the exemplary constants 0.5 used above may be replaced with any suitable value. Many additional methods for dynamic image analysis will occur to those skilled in the art.
Software Elements of the Present Invention
As illustrated, classes with a dotted border, such as Gadget class 1400, are abstract base classes that do not exist by themselves but are used to build concrete derived classes such as Locator class 1420. Classes with a solid border represent dynamic objects that can be created and destroyed as needed by the user in setting up an application, using an HMI 830. Classes with a dashed border, such as Input class 1450, represent static objects associated with specific hardware or software resources. Static objects always exist and cannot be created or destroyed by the user.
All classes are derived from Gadget class 1400, and so all objects that are instances of the classes shown in
The act of analyzing a frame consists of running each Gadget once, in an order determined to guarantee that all logic inputs to a Gadget have been updated before the Gadget is run. In some embodiments, a Gadget is not run during a frame where its logic output is not needed.
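The run order described above, in which every logic input is updated before its consumer runs, is a topological ordering of the wiring graph. As a sketch (Python; the Gadget names below are hypothetical, not from the specification):

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical wiring: each Gadget maps to the set of Gadgets whose
# logic outputs feed its logic inputs.
wiring = {
    "Top":          set(),                     # Locators read only the frame
    "Right":        set(),
    "Box":          set(),                     # a Brightness Detector
    "DetectAND":    {"Top", "Right", "Box"},   # AND Gate combining them
    "ObjectDetect": {"DetectAND"},             # Judge consuming the Gate
}

# Running Gadgets in topological order guarantees every logic input is
# updated before the Gadget that consumes it runs.
order = list(TopologicalSorter(wiring).static_order())
print(order.index("DetectAND") > order.index("Top"))   # True
```

A Gadget whose logic output is not needed in a given frame can simply be skipped when walking this order.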
The Photo class 1410 is the base class for all Gadgets whose logic output depends on the contents of the current frame. These are the classes that actually do the image analysis. Every Photo measures some characteristic of a region of interest (ROI) of the current frame. The ROI corresponds to a visible feature on the object to be inspected. This measurement is called the Photo's analog output. The Photo's logic output is computed from the analog output by means of a fuzzy threshold, called the sensitivity threshold, that is among its set of parameters that can be configured by a user. The logic output of a Photo can be used to provide evidence to be used in making judgments.
The Detector class 1430 is the base class for Photos whose primary purpose is to make measurements in an ROI and provide evidence to be used in making judgments. In an illustrative embodiment all Detector ROIs are circles. A circular ROI simplifies the implementation because there is no need to deal with rotation, and having only one ROI shape simplifies what the user has to learn. Detector parameters include the position and diameter of the ROI.
A Brightness Detector 1440 measures a weighted average or percentile brightness in the ROI. A Contrast Detector 1442 measures contrast in the ROI. An Edge Detector 1444 measures the extent to which the ROI looks like an edge in a specific direction. A Spot Detector 1446 measures the extent to which the ROI looks like a round feature such as a hole. A Template Detector 1448 measures the extent to which the ROI looks like a pre-trained pattern selected by a user. The operation of the Detectors is further described below.
The Locator class 1420 represents Photos that have two primary purposes. The first is to produce a logic output that can provide evidence for making judgments, and in this they can be used like any Detector. The second is to determine the location of an object in the field of view of a vision detector, so that the position of the ROI of other Photos can be moved so as to track the position of the object. Any Locator can be used for either or both purposes.
In an illustrative embodiment, a Locator searches a one-dimensional range in a frame for an edge. The search direction is normal to the edge, and is among the parameters to be configured by the user. The analog output of a Locator is similar to that for an Edge Detector. Locators are further described below.
The Input class 1450 represents input signals to the vision detector, such as an external trigger. The Output class 1452 represents output signals from the vision detector, such as might be used to control a reject actuator. There is one static instance of the Input class for each physical input, such as exemplary input signal 926 (
The Gate base class 1460 implements fuzzy logic decision making. Each Gate has one or more logic inputs that can be connected to the logic outputs of other Gadgets. Each logic input can be inverted (fuzzy NOT) by means of a parameter that a user can configure. An AND Gate 1462 implements a fuzzy AND operation, and an OR Gate 1464 implements a fuzzy OR operation.
The Judge class 1470 is the base class for two static objects, the ObjectDetect Judge 1472 and the ObjectPass Judge 1474. Judges implement dynamic image analysis by weighing evidence over successive frames to make the primary decisions. Each Judge has a logic input to which a user connects the logic output of a Photo or, more typically, a Gate that provides a logical combination of Gadgets, usually Photos and other Gates.
The ObjectDetect Judge 1472 decides if an object has been detected, and the ObjectPass Judge 1474 decides if it passes inspection. The logic input to the ObjectDetect Judge provides the object detection weight for each frame, and the logic input to the ObjectPass Judge provides the object pass score for each frame.
The logic output of the ObjectDetect Judge provides a pulse that indicates when a judgment has been made. In one mode of operation, called “output when processing”, the leading edge of the pulse occurs when the inspection of an object begins, for example at the end of analysis step 540 in
The logic output of the ObjectPass Judge provides a level that indicates whether the most recently inspected object passed. The level changes state when the inspection of an object is complete, for example at the end of analysis step 548.
A Locator 1520 is used to detect and locate the top edge of the object, and another Locator 1522 is used to detect and locate the right edge.
A Brightness Detector 1530 is used to help detect the presence of the object. In this example the background is brighter than the object, and the sensitivity threshold is set to distinguish the two brightness levels, with the logic output inverted to detect the darker object and not the brighter background.
Together the Locators 1520 and 1522, and the Brightness Detector 1530, provide the evidence needed to judge that an object has been detected, as further described below.
A Contrast Detector 1540 is used to detect the presence of the hole 1512. When the hole is absent the contrast would be very low, and when present the contrast would be much higher. A Spot Detector could also be used.
An Edge Detector 1560 is used to detect the presence and position of the label 1510. If the label is absent, mis-positioned horizontally, or significantly rotated, the analog output of the Edge Detector would be very low.
A Brightness Detector 1550 is used to verify that the correct label has been applied. In this example, the correct label is white and incorrect labels are darker colors.
As the object moves from left to right through the field of view of the vision detector, Locator 1522 tracks the right edge of the object and repositions Brightness Detector 1530, Contrast Detector 1540, Brightness Detector 1550, and Edge Detector 1560 to be at the correct position relative to the object. Locator 1520 corrects for any variation in the vertical position of the object in the field of view, repositioning the detectors based on the location of the top edge of the object. In general Locators can be oriented in any direction.
A user can manipulate Photos in an image view by using well-known HMI techniques. A Photo can be selected by clicking with a mouse, and its ROI can be moved, resized, and rotated by dragging. Additional manipulations for Locators are described below.
Referring still to the wiring diagram of
The logic output of AND Gate 1610 represents the level of confidence that the top edge of the object has been detected, the right edge of the object has been detected, and the background has not been detected. When confidence is high that all three conditions are true, confidence is high that the object itself has been detected. The logic output of AND Gate 1610 is wired to the ObjectDetect Judge 1600 to be used as the object detection weight for each frame.
Since the logic input to the ObjectDetect Judge in this case depends on the current frame, the vision detector is operating in visual event detection mode. To operate in external trigger mode, an Input Gadget would be wired to ObjectDetect. To operate in continuous analysis mode, nothing would be wired to ObjectDetect.
The choice of Gadgets to wire to ObjectDetect is made by a user based on knowledge of the application. In the example of
In the wiring diagram, Contrast Detector “Hole” 1640, corresponding to Contrast Detector 1540, Brightness Detector “Label” 1650, corresponding to Brightness Detector 1550, and Edge Detector “LabelEdge” 1660, corresponding to Edge Detector 1560, are wired to AND Gate 1612. The logic output of AND Gate 1612 represents the level of confidence that all three image features have been detected, and is wired to ObjectPass Judge 1602 to provide the object pass score for each frame.
The logic output of ObjectDetect Judge 1600 is wired to AND Gate 1670. The logic output of ObjectPass Judge 1602 is inverted and also wired to AND Gate 1670. The ObjectDetect Judge is set to "output when done" mode, so a pulse appears on the logic output of ObjectDetect Judge 1600 after an object has been detected and inspection is complete. Since the logic output of ObjectPass 1602 has been inverted, this pulse will appear on the logic output of AND Gate 1670 only if the object has not passed inspection. The logic output of AND Gate 1670 is wired to an Output Gadget 1680, named "Reject", which controls an output signal from the vision detector that can be connected directly to a reject actuator 170. The Output Gadget 1680 is configured by a user to perform the appropriate delay 570 needed by the downstream reject actuator.
A user can manipulate Gadgets in a logic view by using well-known HMI techniques. A Gadget can be selected by clicking with a mouse, its position can be moved by dragging, and wires can be created by a drag-drop operation.
To aid the user's understanding of the operation of the vision detector, Gadgets and/or wires can change their visual appearance to indicate fuzzy logic values. For example, Gadgets and/or wires can be displayed red when the logic value is below 0.5, and green otherwise. In
One skilled in the art will recognize that a wide variety of objects can be detected and inspected by suitable choice, configuration, and wiring of Gadgets. One skilled in the art will also recognize that the Gadget class hierarchy is only one of many software techniques that could be used to practice the invention.
Image Analysis Methods for Detectors
where wi is the ith weight and zi is the corresponding pixel gray level. In the illustrative embodiment of
so that pixels near the center are weighted somewhat higher than those near the edge. One advantage of a center-weighted Brightness Detector is that if a bright feature happens to lie near the edge of the Detector's ROI, then slight variations in its position will not cause large variations in the analog output. In
In the illustrative embodiment of
In another illustrative embodiment, the analog output is defined by the function C(q), which is the gray level such that:
where q is a percentile chosen by a user. C is the inverse cumulative weighted distribution of gray levels. Various useful values of q are given in the following table:
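For illustration only (Python; not part of the specification), the two Brightness Detector analog outputs described above, the weighted average and the inverse cumulative weighted distribution C(q), can be sketched as follows. The nearest-rank form of the percentile is an assumption:

```python
def weighted_brightness(weights, grays):
    """Weighted average gray level: sum(w_i * z_i) / sum(w_i)."""
    return sum(w * z for w, z in zip(weights, grays)) / sum(weights)

def weighted_percentile(weights, grays, q):
    """C(q): the gray level below which a fraction q of the total
    weight lies (inverse cumulative weighted distribution,
    nearest-rank form)."""
    pairs = sorted(zip(grays, weights))
    target = q * sum(weights)
    acc = 0.0
    for z, w in pairs:
        acc += w
        if acc >= target:
            return z
    return pairs[-1][0]

w = [1, 2, 2, 1]            # center-weighted: inner pixels count more
z = [10, 100, 120, 200]
print(round(weighted_brightness(w, z), 2))   # 108.33
print(weighted_percentile(w, z, 0.5))        # 100 (weighted median)
```
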
In one embodiment of a Contrast Detector, the analog output is the standard deviation of the gray levels within the ROI. In an illustrative embodiment, the array of positive weights 1700 is used to compute a weighted standard deviation:
In another illustrative embodiment, the analog output is given by
C(qhi)−C(qlo) (12)
where the q values may be chosen by the user. Useful values are qhi=0.95, qlo=0.05.
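For illustration only (Python; not part of the specification), the weighted standard deviation form of the Contrast Detector's analog output can be sketched as follows:

```python
import math

def weighted_std(weights, grays):
    """Weighted standard deviation of ROI gray levels: one analog
    output used for a Contrast Detector."""
    W = sum(weights)
    mean = sum(w * z for w, z in zip(weights, grays)) / W
    var = sum(w * (z - mean) ** 2 for w, z in zip(weights, grays)) / W
    return math.sqrt(var)

uniform_w = [1, 1, 1, 1]
low  = [100, 101, 99, 100]    # nearly uniform ROI: e.g. hole absent
high = [20, 220, 30, 210]     # strong dark/light mix: e.g. hole present
print(weighted_std(uniform_w, low) < weighted_std(uniform_w, high))  # True
```

The C(qhi)−C(qlo) variant of equation (12) simply subtracts two values of the percentile function C, clipping off the extreme tails of the distribution.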
In
The step kernel 1800, with values ki, can be considered to be the product of an ideal step edge template ei and a kernel of positive weights wi:
Note that the ideal step edge template values ei are +1 when ki>0, corresponding to the black on white region of step kernel 1800, and −1 when ki<0, corresponding to the white on black region of step kernel 1800.
Define contrast C and weighted normalized correlation R2 of the step kernel and a like-shaped ROI with pixel values zi as follows:
The contrast C uses the standard formula for weighted standard deviation, and R2 uses the standard formula for weighted normalized correlation, but simplified because for step kernel 1800
An orthogonal step kernel 1810 with values ki′ is also created that is identical to the step kernel 1800 but rotated 90 degrees. The ratio
is a reasonable estimate of the tangent of the angle between the actual and expected direction of an edge, particularly for small angles where D is also a good estimate of the angle itself. Note that an orthogonal step kernel 1810 doesn't need to be created—the values from the step kernel 1800 can be used, but corresponding to the pixel values in the ROI in a different order.
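For illustration only (Python; not part of the specification), R2 and D can be sketched as follows. The exact equations of the specification are not reproduced here; this sketch uses the standard weighted normalized correlation formula, simplified under the assumption (as for step kernel 1800) that the weighted template sums to zero:

```python
import math

def wncc_step(weights, template, grays):
    """Weighted normalized correlation R^2 between an ideal +/-1 step
    template e_i and ROI gray levels z_i, with positive weights w_i
    (kernel values k_i = e_i * w_i). Assumes sum(w_i * e_i) == 0."""
    W    = sum(weights)
    swz  = sum(w * z for w, z in zip(weights, grays))
    swzz = sum(w * z * z for w, z in zip(weights, grays))
    skz  = sum(w * e * z for w, e, z in zip(weights, template, grays))
    var  = W * swzz - swz * swz      # proportional to weighted variance
    return 0.0 if var == 0 else (skz * skz) / var

def edge_angle_deg(weights, template, ortho_template, grays):
    """Estimate of the angle D between actual and expected edge
    directions: arctangent of the ratio of the orthogonal-kernel
    correlation to the step-kernel correlation."""
    skz  = sum(w * e * z for w, e, z in zip(weights, template, grays))
    skoz = sum(w * e * z for w, e, z in zip(weights, ortho_template, grays))
    return math.degrees(math.atan(abs(skoz / skz)))

w, e = [1, 1, 1, 1], [-1, -1, 1, 1]   # a tiny 1-D "step kernel"
z = [10, 10, 200, 200]                # an ideal step in the ROI
print(wncc_step(w, e, z))                          # 1.0: perfect step edge
print(edge_angle_deg(w, e, [-1, 1, -1, 1], z))     # 0.0: edge at expected angle
```
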
A weighted normalized correlation operation 1900 using ROI 1910 and step kernel 1920 computes R2. A contrast operation 1930 using ROI 1910 and step kernel 1920 computes C, which is converted by fuzzy threshold operation 1940 into a fuzzy logic value 1942 indicating the confidence that the contrast is above the noise level. Weighted correlation operations 1950 and 1952, using ROI 1910, step kernel 1920, and orthogonal step kernel 1922, and absolute value of arctangent of ratio operation 1960, compute D, which is converted by fuzzy threshold operation 1970 into a fuzzy logic value 1972 indicating the confidence that the angle between the expected and actual edge directions is small.
A fuzzy AND element 1980 operates on R2 and fuzzy logic values 1942 and 1972 to produce the analog output 1990 of the Edge Detector. Note that R2, being in the range 0-1, can be used directly as a fuzzy logic value. The analog output 1990 is in the range 0-1, but it can be multiplied by some constant, for example 100, if a different range is desired. Note that the logic output of an Edge Detector is derived from the analog output using the sensitivity threshold that all Photos have.
In
The use of ridge kernel 2000 is similar to that for step kernel 1800. The contrast C is computed using the same formula, but R2 uses a different formula because the sum of the kernel values is not 0:
Note that this formula reduces to the one used for step edges when the sum of the kernel values is 0.
A different method is used to determine the angle D between the actual and expected edge directions. A positive rotated ridge kernel 2020 with values ki+ is created with an edge direction θ+a, and a negative rotated ridge kernel 2010 with values ki− is created with an edge direction θ−a. A parabola is fit to the three points
The x coordinate of the minimum of the parabola is a good estimate of the angle D between the actual and expected edge directions.
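For illustration only (Python; not part of the specification), the three-point parabola fit can be sketched as follows, using the standard closed form for the vertex of a parabola through (−a, y−), (0, y0), (+a, y+):

```python
def parabola_vertex_x(a, y_minus, y0, y_plus):
    """Fit a parabola to (-a, y_minus), (0, y0), (+a, y_plus) and
    return the x coordinate of its extremum (the minimum referred to
    in the text): the estimated angle D between the actual and
    expected edge directions."""
    denom = y_minus - 2.0 * y0 + y_plus
    if denom == 0:
        return 0.0   # degenerate: three collinear points
    return 0.5 * a * (y_minus - y_plus) / denom

# Symmetric responses give a best fit at 0; an asymmetric response
# pulls the estimate toward the rotated kernel that matched better.
print(parabola_vertex_x(5.0, 0.4, 0.9, 0.4) == 0.0)   # True
```
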
R2 and the fuzzy logic values are used by fuzzy AND element 2180 to produce a ridge analog output 2192 for an Edge Detector that can detect ridge edges. For an Edge Detector that can detect either step or ridge edges, the ridge analog output 2192 and analog output 1990 from a step edge detector 2188 can be used by fuzzy OR element 2182 to produce a combined analog output 2190.
Name text box 2200 allows a user to view and enter a Gadget's name. Time label 2202 shows the time taken by the most recent run of a Gadget. Logic output label 2204 shows a Gadget's current logic output value, and may change color, shape, or other characteristic to distinguish between true (≧0.5) and false (<0.5). Invert checkbox 2206 allows a Gadget's logic output to be inverted.
Thumbs-up button 2210 and thumbs-down button 2212 are used for learning, as further described below (
Position controls 2220 are used to position a Photo in the field of view. Diameter spinner 2222 is used to change the diameter of a Detector. Direction controls 2224 are used to orient an Edge Detector to the expected edge direction. Position, diameter, and orientation can also be set by manipulation of graphics in an image view, for example the image view of
Edge type checkboxes 2230 are used to select the types of edges to be detected and the edge polarity. Dark-to-light step, light-to-dark step, dark ridge, and light ridge can be selected. Any combination of choices is allowed, except for choosing none.
Jiggle spinner 2240 allows the user to specify a parameter j such that the Edge Detector will be run at a set of positions ±j pixels around the specified position, and the position with the highest analog output will be used.
Sensitivity threshold controls 2250 allow the user to set the sensitivity fuzzy threshold of a Photo. Zero-point label 2251 shows value t0 1120 (
Contrast threshold controls 2260 allow the user to view the contrast C and set the contrast fuzzy thresholds 1940 and 2140. These controls operate in the same manner as the sensitivity threshold controls 2250.
Direction error controls 2270 allow the user to view the angle between the actual and expected edge directions D and set the direction fuzzy thresholds 1970 and 2170. These controls operate in the same manner as the sensitivity threshold controls 2250, except that the thermometer display fills from right-to-left instead of left-to-right because lower values of D correspond to higher fuzzy logic values.
The use of spot kernel 2300 is similar to that for ridge kernel 2000. Weighted normalized correlation R2 and contrast C are computed using the same formulas as was used for the ridge kernel.
Methods and Human-Machine Interface for Locators
In an illustrative embodiment, a Locator searches a one-dimensional range for an edge, using any of a variety of well-known techniques. The search direction is normal to the edge, and a Locator has a width parameter that is used to specify smoothing along the edge, which is used in well-known ways. The analog output of a Locator depends on the particular method used to search for the edge.
In an illustrative embodiment, a Locator searches a one-dimensional range for an edge using the well-known method of computing a projection of the ROI parallel to the edge, producing a one-dimensional profile along the search range. The one-dimensional profile is convolved with a one-dimensional edge kernel, and the location of the peak response corresponds to the location of the edge. An interpolation, such as the well-known parabolic interpolation, can be used if desired to improve the edge location accuracy. In another embodiment, an edge can be located by searching for a peak analog output using the edge detector of
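For illustration only (Python; not part of the specification), the projection-and-convolution search described above can be sketched as follows, using a simple [-1, 0, +1] edge kernel and parabolic interpolation; the kernel choice is an assumption:

```python
def locate_edge(roi_rows):
    """1-D Locator sketch: project the ROI parallel to the edge,
    convolve the profile with a simple [-1, 0, +1] edge kernel, take
    the peak response, and refine with parabolic interpolation."""
    # Projection: average each column across rows (edge assumed vertical).
    cols = len(roi_rows[0])
    profile = [sum(row[c] for row in roi_rows) / len(roi_rows)
               for c in range(cols)]
    # Convolution with the 1-D edge kernel.
    resp = [profile[i + 1] - profile[i - 1] for i in range(1, cols - 1)]
    peak = max(range(len(resp)), key=lambda i: abs(resp[i]))
    # Parabolic interpolation around the peak for sub-pixel accuracy.
    if 0 < peak < len(resp) - 1:
        y0, y1, y2 = abs(resp[peak - 1]), abs(resp[peak]), abs(resp[peak + 1])
        denom = y0 - 2 * y1 + y2
        offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    else:
        offset = 0.0
    return peak + 1 + offset   # +1 restores the convolution border

roi = [[10, 10, 10, 200, 200, 200],
       [10, 10, 10, 200, 200, 200]]
print(locate_edge(roi))   # 2.5 -> edge midway between columns 2 and 3
```
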
In another embodiment, a Locator searches a multi-dimensional range, using well-known methods, that may include translation, rotation, and size degrees of freedom. It will be clear to one skilled in the art how to employ multi-dimensional Locators to position Photos in practicing the invention, so the following discussion will be limited to one-dimensional Locators, which are preferred due to their simplicity.
Detector 2510 and Locator 2512 can be moved around in the FOV by clicking anywhere on their border and dragging. Detector 2510 has a resize handle 2520 for changing its diameter, and Locator 2512 has a resize handle 2522 for changing its width and range, and a rotate handle 2524 for changing its direction. All Photos can be moved by dragging the border, and have similar handles as appropriate to their operation.
In the illustrative embodiment of
A Locator has a rail 2532, shown in the Figure as a dashed line, that is coincident with the plunger but extending in both directions to the edge of the image view.
Every Photo can be linked to zero or more locators, up to some maximum number determined by the particular embodiment of the invention. The number of links determines the number of degrees of freedom that the Locators can control. Degrees of freedom include rotation, size, and the two degrees of freedom of translation. In an illustrative embodiment, the maximum number of links is two and only the translation degrees of freedom are controlled.
A linkage defines how a Photo moves as the Locator's plunger moves, following an edge in the image. The movements are defined to keep the Photo at a constant relative distance to the rail or rails of the locators to which it is linked. In an illustrative embodiment the linkages are drawn using a mechanical analogy, such that one could actually build a linkage out of structural elements and bearings and the Photos would move in the same way as forces are applied to the plungers.
In
Every Photo has an emitter, a diamond-shaped handle drawn somewhere on the border. For example Detector 2510 has emitter 2550 and Locator 2512 has emitter 2552. A link is created by drag-dropping a Photo's emitter to any point on a Locator. If the link already exists, the drag-drop might delete the link, or another mechanism for deleting might be used. The user may not create more than the maximum number of allowable links from any Photo, nor any circular dependencies. To aid the user during an emitter drag over a Locator, a tool tip can be provided to tell the user whether a link would be created, deleted, or rejected (and why).
Dragging a Locator does not change the behavior of its plunger—it stays locked on an edge if it can find one, or reverts to the center if not. Thus dragging a locator while an edge is detected just changes its search range; the plunger does not move relative to the FOV. More generally, dragging a Locator never changes the position of any Photo to which it is linked. Dragging a Locator will adjust the rod lengths as necessary to insure that no other Photo moves relative to the FOV.
Any plunger may be dragged manually within the range of its Locator, whether or not it has found an edge, and any linked Photos will move accordingly. This allows users to see the effect of the linkages. As soon as the mouse button is released, the plunger will snap back to its proper position (moving linked Photos back as appropriate).
In
Comparing second image view 2602 with first image view 2600, first plunger 2624 has moved down as it follows a first edge (not shown) in the image, and second plunger 2634 has moved to the left and slightly down as it follows a second edge (not shown). Note that the positions in the FOV of Locators 2620 and 2630 have not changed, but Detector 2610 has moved down and to the left to follow the plungers, which is following the edges of an object and therefore following the motion of the object itself.
In our mechanical analogy, Detector 2610 moves because it is rigidly attached to first rail 2626 by first rod 2622, and to second rail 2636 by second rod 2632. Note that first slider 2628 has slid to the left along first rail 2626, and second slider 2638 has slid down along second rail 2636. The sliders slide along the rails when two non-orthogonal Locators are linked to a Photo.
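The rod-and-rail geometry amounts to a small linear solve: each link constrains the Photo's displacement along its Locator's search direction to equal that plunger's displacement. A minimal sketch of that solve follows; the function name, argument conventions, and the angle guard against nearly parallel Locators are illustrative assumptions, not an actual implementation.

```python
import math

def photo_translation(n1, n2, d1, d2, min_angle_deg=10.0):
    """Solve for the Photo's translation (dx, dy) such that its
    displacement along each Locator's unit search direction n1, n2
    equals that Locator's plunger displacement d1, d2.

    Returns None when the Locators are closer to parallel than
    min_angle_deg, where the 2x2 system becomes unstable.
    """
    det = n1[0] * n2[1] - n1[1] * n2[0]   # sine of the angle between n1 and n2
    if abs(det) < math.sin(math.radians(min_angle_deg)):
        return None                        # nearly parallel: do not move the Photo
    # Solve [n1; n2] @ [dx, dy]^T = [d1, d2]^T by Cramer's rule.
    dx = (d1 * n2[1] - d2 * n1[1]) / det
    dy = (n1[0] * d2 - n2[0] * d1) / det
    return (dx, dy)
```

With orthogonal Locators (one vertical, one horizontal search direction) this reduces to simple x/y tracking, which matches the behavior described above.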
If a Photo were linked to two nearly parallel Locators, its motion would be unstable. It is useful to set an angle limit between the Locators, below which the linked Photo will not be moved. This state can be indicated in some way in the image view, such as by displaying the two rods in a special color such as red.
The ability to have Locators either at fixed positions or linked to other Locators provides important flexibility. In
Note that there need be no limit on the number of Photos that can be linked to a Locator; the degree of freedom limit is on the number of links one Photo can have to Locators. In the example of
The Locators are configured to follow the top and right edges of a circular feature 2750. Comparing second image view 2702 with first image view 2700, the circular feature 2750 has moved down, causing rail 2722 to move down to follow it. This moves both Detector 2710 and second Locator 2730 down. Note that Detector 2710 is at the same position relative to the object, and so is second Locator 2730. This is desirable in this case, because if second Locator 2730 were fixed in the FOV, it might miss the right edge of circular feature 2750 as it moves up and down. Note that this would not be problematic if the edge of an object in the image were a straight line.
First Locator 2720 has no Locator to move it left and right so as to find the top edge of circular feature 2750. It can't link to second Locator 2730 because that would create a circular chain of links, which is not allowed because one Locator must run first, and that Locator cannot be linked to anything. Instead, the motion of the object through the FOV ensures that first Locator 2720 will find the top edge. In the example of
To handle cases like this, Locators have a parameter that can be used to specify the number of parallel sweeps to be made in searching for the edge. The sweeps are spaced apart along the edge by an amount that provides sufficient overlap, so that the edge cannot fall between sweeps and be missed.
Referring still to
First Detector 2910 is linked to nearby first Locator 2920 and second Locator 2922, and will be positioned properly even if the object rotates or changes size (as long as the change is not too big). But second Detector 2912 is too far away—a rotation would tend to mis-position second Detector 2912 vertically relative to second Locator 2922, and a size change would tend to mis-position it horizontally relative to first Locator 2920. Third Locator 2924 is used instead of second Locator 2922 to get the vertical position of second Detector 2912, allowing overall object rotation to be handled. The remote first Locator 2920 is used to get horizontal position for second Detector 2912, so the object size should not vary much. If size variation needs to be handled in addition to rotation, one would add a fourth Locator, near second Detector 2912 and oriented horizontally.
Comparing second image view 2902 with first image view 2900, the object (not shown) has moved to the right and rotated counterclockwise, which can be seen by the motion of the Detectors as the Locators follow the object edges. Note that second Locator 2922 and third Locator 2924 are linked to first Locator 2920 so that they stay close to the Detectors.
Alternate Logic View Using Ladder Diagrams
In an illustrative embodiment, to render a ladder diagram for a configuration of Gadgets one rung is created for each Gadget that has logic inputs: Gates, Judges, and Outputs. The order of the rungs is the automatically determined run order for the Gadgets. Each rung consists of one contact for each logic input, followed by an icon for the Gadget. Contacts are normally open for non-inverted connections and normally closed for inverted connections. For AND Gates the contacts are in series, and for OR Gates the contacts are in parallel. Labels associated with each contact indicate the name of the Gadget that is connected to the logic input.
To simplify the ladder diagram, the user may choose to hide any Gate whose logic output is connected to exactly one logic input of another Gadget via a non-inverted connection. When a Gate is hidden, the rung for that Gate is not displayed. Instead the contacts for that Gate replace the normally open contact that would have appeared on the one rung in which that Gate is used.
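The rung-rendering and Gate-hiding rules above can be sketched as simple text generation. This is a hypothetical illustration only (series contacts for AND semantics; the names and layout are illustrative, echoing the "Reject" rung described below in the text):

```python
def contact(inverted):
    # -[ ]- is a normally open contact, -[/]- a normally closed one.
    return "-[/]-" if inverted else "-[ ]-"

def render_rung(gadget_name, inputs, hidden=None):
    """Render one ladder rung as text: one contact per logic input in
    series (AND semantics), followed by the Gadget as a coil.

    inputs: list of (source_name, inverted) pairs.
    hidden: dict mapping a hidden Gate's name to its own input list;
    a contact for a hidden Gate is replaced by that Gate's contacts,
    per the hiding rule described above.
    """
    hidden = hidden or {}
    cells = []
    for src, inv in inputs:
        if src in hidden:
            cells += [contact(i) + s for s, i in hidden[src]]  # splice hidden Gate
        else:
            cells.append(contact(inv) + src)
    return "|" + "".join(cells) + "( " + gadget_name + " )"
```

For example, an Output named "Reject" fed by a hidden AND Gate with a non-inverted and an inverted input renders as a single rung of two contacts.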
Referring to
Similarly, rung 3010 shows normally open contacts Hole 3040, Label 3042, and LabelEdge 3044 connected to an AND Gate (hidden AND Gate 1612), whose output is connected to ObjectPass Judge 3012.
Rung 3020 shows normally open contact 3050 and normally closed contact 3052 connected to an AND Gate (hidden AND Gate 1670), whose output is connected to Output Gadget 3022 named “Reject”.
Note that the ladder diagrams used by the above-described embodiment of the invention are a limited subset of ladder diagrams widely used in industry. They are provided primarily to aid users who are familiar with ladder diagrams. The subset is chosen to match the capabilities of wiring diagrams, which simplifies the implementation and also allows a user to choose between wiring and ladder diagrams as need arises.
In another embodiment, a more complete implementation of ladder diagrams is provided and wiring diagrams are not used. The result combines the capabilities of a vision detector with those of a PLC, resulting in a powerful industrial inspection machine but at a cost of increased complexity.
Marking, Synchronized Outputs, and Related Measurements
As discussed above, objects are detected either by visual event detection or external trigger (continuous analysis mode is used when there are no discrete objects). Furthermore, objects may be presented either by indexing, in which case they come to a stop in the FOV, or continuous motion. When using an external trigger the analysis is typically the same regardless of how objects are presented. For visual event detection, however, the analysis may depend on whether the object will come to a stop (indexing) or be in roughly uniform motion (continuous). For example, a vision detector may not be able to measure or use object speed in an indexed application.
Visual event detection is a novel capability and suggests novel output signal control, particularly when continuous object presentation is used. It is desirable that a vision detector be able to control some external actuator, either directly or by serving as input to a PLC. This suggests, for continuous presentation at least, that the timing of output signals be related with reasonable precision to the point in time when the object passed a particular, fixed point in the production flow. In the example of
For prior art vision systems this goal is addressed by the external trigger, which is typically a photodetector that responds within microseconds of the mark time. This signal, which triggers the vision system (e.g. signal 166 in
The present invention, when used in visual event detection mode, can provide outputs synchronized to reasonable precision with the mark time, whether it controls the actuator directly or is used by a PLC. One concern, however, is that like a vision system and unlike a photodetector, a vision detector makes its decision about the object many milliseconds after the mark time. Furthermore, the delay may be quite variable, depending on how many frames were analyzed and, to a lesser extent, when in the acquire/process cycle the mark time occurs.
If ObjectDetect logic output 3140 and ObjectPass logic output 3150 are wired to an AND Gate, a pulse (not shown) will be produced only when an object that passes inspection is detected. If the logic output of ObjectPass is inverted, the AND Gate will produce a pulse (not shown) only when an object that fails inspection is detected.
The detect pulse 3170, and pulses indicating passing object detected and failing object detected, are all useful. In an indexed application they might be used directly by actuators. A PLC can use an external trigger to synchronize these pulses with actuators. But when objects are in continuous motion and no external trigger is used, the pulses often cannot be used directly to control actuators because of the variable decision delay 3130.
The invention solves this problem by measuring the mark time 3100 and then synchronizing an output pulse 3180 on output signal 3160 to it. The output pulse 3180 occurs at a fixed output delay 3120 from mark time 3100. Referring also to the timing diagram in
The act of measuring the mark time is called marking. The mark time can be determined to sub-millisecond accuracy by linear interpolation, least-squares fit, or other well-known methods, using the known times (counts) at which the images were captured and the known positions of the object as determined by appropriate Locators. Accuracy will depend on shutter time, overall acquisition/processing cycle time, and object speed.
In an illustrative embodiment a user chooses one Locator whose search range is substantially along the direction of motion to be used for marking. The mark point is arbitrarily chosen to be the center point of the Locator's range—as discussed above, the mark point is an imaginary reference point whose exact position doesn't matter as long as it is fixed. The user can achieve the desired synchronization of output signals by adjusting the delay from this arbitrary time. If an object is detected that does not cross the mark point during the active frames, the mark time can be based on an extrapolation and the accuracy may suffer.
The user may as an alternative specify that the mark time occurs when the object is first detected. This option might be selected in applications where Locators are not being used, for example when visual event detection relies on Detectors placed in fixed positions in the FOV (see
Note that output signals can only be synchronized to the mark time if output delay 3120 is longer than the longest expected decision delay 3130. Thus the actuator should be sufficiently downstream of the mark point, which is expected to be the case in almost all applications.
When an external trigger is used, the mark time is relative to the time at which the trigger occurs (e.g. mark time 680 in
Location row 3240 shows the location of an edge measured by a Locator, oriented to have a search range substantially in the direction of motion and chosen by a user. The location is measured relative to the center of the Locator's search range, and is shown only for the active frames. It can be seen that the location crosses zero, the mark point, somewhere between frames 61 and 62, which is between times 44.2 and 46.2, and counts 569 and 617.
In this example, dynamic image analysis ends on frame 66, after two consecutive inactive frames are found. A mark time based on location, shown in sixth row 3250, is computed at the end of frame 66. The value shown, 45.0, is the linear interpolated time between frames 61 and 62 where the location crosses zero. As an alternative, a line can be fit to the points for the active frames from time row 3220 and location row 3240, and that line can be used to calculate the time value corresponding to location 0.
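The linear interpolation just described can be sketched as follows. The per-frame location values are hypothetical (the text does not give them); they are chosen so that the zero crossing between times 44.2 and 46.2 lands at 45.0, matching the example.

```python
def mark_time(times, locations):
    """Interpolate the time at which the measured location crosses zero
    (the mark point), given capture times and Locator locations for the
    active frames. Uses linear interpolation between the two frames
    that bracket the zero crossing; returns None if there is none."""
    for k in range(len(times) - 1):
        x0, x1 = locations[k], locations[k + 1]
        if x0 == 0.0:
            return times[k]
        if (x0 > 0.0) != (x1 > 0.0):
            # Linear interpolation to the zero crossing.
            return times[k] + (times[k + 1] - times[k]) * x0 / (x0 - x1)
    return None
```

As the text notes, a least-squares line fit over all active frames can be used instead of two-point interpolation, at some cost in complexity.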
A mark time based on the time that the object was first detected is shown in the seventh row 3260.
A considerable amount of additional useful information can be obtained from the measured data, summarized in the following table:
As can be seen, the information can be computed using two points (linear interpolation using zero-crossing frames 61 and 62 for the mark, and slope using frames 59 and 64 for speed and size), or by a least-squares fit, or by other methods known in the art. The results are similar for the two methods shown, with the least-squares method generally being more accurate but more complex.
Mark time and count may be used for output signal synchronization as explained above. Object speed may be communicated to automation equipment for a variety of purposes. The pixel size calculation gives a calibration of the size of a pixel in encoder counts, which are proportional to physical distance along the production line. Such a calibration can be used for a variety of well-known purposes, including presenting distances in the FOV in physical units for the user, and transporting a setup between vision detectors that may have slightly different optical magnifications by adjusting the size and position of Photos based on the calibrations.
Since pixel size can be calculated for every object, the value can be used to determine object distance. Smaller pixel sizes correspond to objects that are farther away. For constant speed production lines the same determination can be made using object speed, just as when looking out a car window distant objects appear to be moving slower than nearby ones.
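The pixel-size calibration can be sketched as a least-squares slope of encoder count versus measured location over the active frames. The function name and data below are hypothetical illustrations.

```python
def pixel_size(counts, locations):
    """Least-squares slope of encoder count versus measured edge
    location (in pixels) over the active frames. The magnitude is the
    size of a pixel in encoder counts; the sign depends only on the
    direction of motion, so the absolute value is returned."""
    n = len(counts)
    mean_c = sum(counts) / n
    mean_l = sum(locations) / n
    num = sum((l - mean_l) * (c - mean_c) for l, c in zip(locations, counts))
    den = sum((l - mean_l) ** 2 for l in locations)
    return abs(num / den)
```

A setup could then be transported between vision detectors by scaling the sizes and positions of Photos by the ratio of the two devices' calibrations.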
The data in
Referring to
Judges, Outputs, and Examples of Use
Presentation control 3400 allows selection of either indexed or continuous object presentation.
Frame filtering controls 3410 allow the user to set limits on the count or total object detection weight of active frames. Minimum frame spinner 3412 allows a user to choose the minimum required count or weight threshold, as explained above in the description of
Idle time controls 3420 allow a user to specify minimum and maximum times for idle step 580 (
Missing frame spinner 3430 allows a user to specify the maximum number of consecutive inactive frames that will be accepted without terminating dynamic image analysis. Such a frame is illustrated by analysis step 542 in
Marking control 3440 allows a user to select the marking mode. If marking by location is selected, the user must specify a Locator using locator list control 3442.
Output mode control 3450 allows a user to select the mode that defines when a pulse appears on the logic output.
Frame count spinner 3460 allows a user to select the number of frames to analyze in external trigger mode.
ObjectDetect Judge 3720 is configured for continuous presentation, mark by location, and output when done, as shown in
A first parameter view 3700 shows that “DetectOut” Output Gadget 3730 is configured to generate a synchronized 10 ms. pulse 25 ms. after the mark time. This pulse will be triggered by the rising edge of ObjectDetect Judge 3720. A second parameter view 3710 shows that “PassOut” Output Gadget 3750 is configured to send the pass/fail result from ObjectPass Judge 3740 directly to its output signal.
A PLC could sense the two output signals, noting the time of the rising edge of the signal from “DetectOut” and latching the output of “PassOut” on that edge. The PLC would then know that an object has been detected, when it crossed the mark point (25 ms. before the rising edge of “DetectOut”), and the result of the inspection.
First parameter view 3900 and second parameter view 3910 show that both Output Gadgets pass their logic inputs straight through to their output signals. A PLC could sense the two output signals and latch the output of “PassOut” on the rising edge of “DetectOut”. The PLC would then know that an object had been detected and the result of the inspection; it would have to obtain the mark time, if needed, from the external trigger.
Another way to configure the invention to operate in external trigger mode when connected to a PLC would be to use the Output Gadget configuration of
Continuous Analysis and Examples
Locator 4120 and Edge Detector 4122 are configured to inspect the web. If the web breaks, folds over, or becomes substantially frayed at either edge, then Locator 4120 and/or Edge Detector 4122 will produce a false output (logic value <0.5). If the web moves up or down Locator 4120 will track the top edge and keep Edge Detector 4122 in the right relative position to detect the bottom edge. However, if the width of the web changes substantially, Edge Detector 4142 will produce a false output.
In a logic view “Top” Locator 4140 represents Locator 4120, and “Bottom” Detector 4150 represents Edge Detector 4122. These are wired to AND Gate 4160, whose logic output is inverted and wired to “DetectOut” Output gadget 4170. As can be seen in parameter view 4130, the inverted output of AND Gate 4160 is passed directly to an output signal.
The output signal will therefore be asserted whenever either Photo's logic output is false. The signal will be updated at the high frame rate of the vision detector, providing a continuous indication of the status of the web.
Parameter view 4200 shows how the ObjectDetect Judge is configured. In this application there are no discrete objects, but an “object” is defined to be a stretch of defective web. The output mode is set to “output when processing”, so one output pulse is produced for each stretch of defective web, and the duration of the pulse is the duration of the defect.
By setting the minimum frame count to 3, insignificant stretches of defective web are filtered out and not detected. By allowing up to 3 missing frames, insignificant stretches of good web immersed in a longer defective portion are also filtered out. Thus the output signal will be similar to that of
There is no maximum frame count specified, so a stretch of defective web can continue indefinitely and be considered one defect. The idle time may be set to zero so that the web is always being inspected.
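The frame-filtering behavior described above can be sketched as a simple segmentation over per-frame results. This is a hypothetical illustration; a real implementation would also support weighting by total object detection weight rather than a bare frame count.

```python
def defect_events(active_flags, min_frames=3, max_missing=3):
    """Group per-frame active/inactive results into defect events.
    An event ends only after more than max_missing consecutive
    inactive frames; events with fewer than min_frames active frames
    are filtered out. Returns (start, end) index pairs, where end is
    the last active frame of the event."""
    events = []
    start = last_active = None
    missing = 0

    def close():
        if start is not None and sum(active_flags[start:last_active + 1]) >= min_frames:
            events.append((start, last_active))

    for i, active in enumerate(active_flags):
        if active:
            if start is None:
                start = i
            last_active = i
            missing = 0
        elif start is not None:
            missing += 1
            if missing > max_missing:
                close()
                start = None
                missing = 0
    close()
    return events
```

With min_frames=3 and max_missing=3, a two-frame blip is ignored, and short gaps inside a long defect do not split it into two events, as described above.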
Note in parameter view 4200 that the ObjectDetect Judge is configured to set the mark time to the time that the “object”, here a stretch of defective web, is first detected. As can be seen in parameter view 4300, a 50 ms. output pulse is generated 650 encoder counts after the mark time.
Detecting Objects without Using Locators
A logic view shows “Right” Brightness Detector 4440 corresponding to right Brightness Detector ROI 4432, and “Left” Brightness Detector 4442 corresponding to left Brightness Detector ROI 4430. “Right” Brightness Detector 4440 produces a true logic output when object 4400 is not covering right Brightness Detector ROI 4432, because the background in this example is brighter than object 4400. “Left” Brightness Detector 4442 produces a true logic output when object 4400 is covering left Brightness Detector ROI 4430, because its output is inverted. Therefore AND Gate 4460 produces a true logic output when the right edge of object 4400 is between left Brightness Detector ROI 4430 and right Brightness Detector ROI 4432.
Note that the logic output of AND Gate 4460 is actually a fuzzy logic level that will fall between 0 and 1 when the right edge of object 4400 partially covers either ROI. Contrast Detector ROI 4420 must be large enough to detect feature 4410 over the range of positions defined by the separation between left Brightness Detector ROI 4430 and right Brightness Detector ROI 4432, because, since no Locators are being used, it will not be moved.
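This fuzzy behavior can be sketched assuming the common convention that fuzzy AND is the minimum and inversion is 1 − x, with detector outputs normalized to the range 0–1; the function names are hypothetical.

```python
def fuzzy_not(x):
    # Fuzzy inversion: 1 - x.
    return 1.0 - x

def fuzzy_and(*xs):
    # Fuzzy AND taken as the minimum of the inputs.
    return min(xs)

def object_detect(left_brightness, right_brightness):
    """Fuzzy level for 'the right edge of the object lies between the
    two Brightness ROIs', assuming a bright background: the left ROI
    is covered (dark, so its inverted output is high) AND the right
    ROI is uncovered (bright, so its output is high)."""
    return fuzzy_and(fuzzy_not(left_brightness), right_brightness)
```

A partially covered ROI yields an intermediate brightness, so the gate output falls smoothly between 0 and 1 as the edge crosses either ROI.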
AND Gate 4460 is wired to the ObjectDetect Judge, and “Hole” Contrast Detector 4470, corresponding to Contrast Detector ROI 4420, is wired to the ObjectPass Judge. The Judges in this example are configured for visual event detection and direct control of a reject actuator, as in
Note that in the example of
Learning
In an illustrative embodiment Photos can learn appropriate settings for the sensitivity fuzzy thresholds. The learning process may also provide suggestions as to what Detectors to use and, where appropriate, what settings are likely to work best. Learning is by example—users will present objects that they have judged to be good or bad, and will so indicate by interaction with HMI 830.
Learning is optional, as default or manual settings can be used, but is strongly recommended for the Brightness and Contrast Detectors because their analog outputs are in physical units (e.g. gray levels) that have no absolute meaning. Learning is less critical for the Edge, Spot, and Template Detectors, because their outputs are primarily based on a normalized correlation value that is dimensionless and has absolute meaning.
For example, if an Edge or Spot Detector has an analog output of 80, one can be fairly confident that an edge or spot has indeed been detected, because the correlation coefficient of the image ROI with an ideal edge or spot template is at least 0.8. If the output is 25, one can be fairly confident that no edge or spot has been detected. But with, for example, a Brightness Detector, is 80 bright enough? Is 25 dark enough? This is best learned by example in most instances.
There are two parts to the learning process. The first concerns how users interact with HMI 830 to teach a particular example. The second concerns what a Photo does when presented with such an example.
Referring to
Clicking green thumbs-up button 2210 means “learn that the logic output of this Photo should now be green (true).” This operation is called learn thumbs-up. Clicking red thumbs-down button 2212 means “learn that the logic output of this Photo should now be red (false).” This operation is called learn thumbs-down. These semantics are intended to be clear and unambiguous, particularly when the ability to invert the output comes into play. The terms “good” and “bad”, often used in describing example objects used for learning, change meaning depending on whether the output is inverted and how it is used by Gates to which it is connected.
Suppose one is using three Detectors that all must have a true output for an object to pass inspection. One can teach each Detector individually, but this could be unnecessarily cumbersome. If a good object is presented, all three Detectors should be true and so a single click somewhere to say, “this is a good object” would be useful. On the other hand, if a bad object is presented, there is often no way to know which Detectors are supposed to be false, and so they are generally taught individually.
Likewise, if the three Detectors are OR'd instead, meaning that an object passes if any of them have a true output, then one might teach a bad object with a single click, but for a good object the Detectors are generally trained individually. Once again, however, these rules change as inputs and outputs are inverted.
Teaching multiple detectors with one click can be managed without confusion by adding thumbs-up and thumbs-down buttons to Gates, following rules shown in
An AND Gate with a non-inverted output learns thumbs-up by telling Gadgets wired to its non-inverted inputs to learn thumbs-up, and Gadgets wired to its inverted inputs to learn thumbs-down. For example, non-inverted AND Gate 4500 learns thumbs-up by telling Photo 4502 to learn thumbs-up, AND Gate 4504 to learn thumbs-up, and Photo 4506 to learn thumbs-down. An AND Gate with a non-inverted output cannot learn thumbs-down, so that button can be disabled and it ignores requests to do so.
An AND Gate with an inverted output learns thumbs-down by telling Gadgets wired to its non-inverted inputs to learn thumbs-up, and Gadgets wired to its inverted inputs to learn thumbs-down. For example, inverted AND Gate 4510 learns thumbs-down by telling Photo 4512 to learn thumbs-up, Photo 4514 to learn thumbs-up, and OR Gate 4516 to learn thumbs-down. An AND Gate with an inverted output cannot learn thumbs-up, so that button can be disabled and it ignores requests to do so.
An OR Gate with a non-inverted output learns thumbs-down by telling Gadgets wired to its non-inverted inputs to learn thumbs-down, and Gadgets wired to its inverted inputs to learn thumbs-up. For example, non-inverted OR Gate 4520 learns thumbs-down by telling OR Gate 4522 to learn thumbs-down, Photo 4524 to learn thumbs-down, and Photo 4526 to learn thumbs-up. An OR Gate with a non-inverted output cannot learn thumbs-up, so that button can be disabled and it ignores requests to do so.
An OR Gate with an inverted output learns thumbs-up by telling Gadgets wired to its non-inverted inputs to learn thumbs-down, and Gadgets wired to its inverted inputs to learn thumbs-up. For example, inverted OR Gate 4530 learns thumbs-up by telling Photo 4532 to learn thumbs-down, Photo 4534 to learn thumbs-down, and AND Gate 4536 to learn thumbs-up. An OR Gate with an inverted output cannot learn thumbs-down, so that button can be disabled and it ignores requests to do so.
Photos that are told to learn thumbs-up or thumbs-down by a Gate act as if the equivalent button had been clicked. Gates pass the learn command back to their inputs as just described. All other Gadgets ignore learning commands in this example.
One exception to the above rules is that any Gate that has exactly one input wired to Photos, either directly or indirectly through other Gates, can learn thumbs-up and thumbs-down, so both buttons are enabled. Learn commands for such Gates are passed back to said input, inverted if the Gate's output or said input is inverted, but not both.
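The four Gate rules reduce to a compact recursion: after accounting for output inversion, an AND Gate accepts only thumbs-up and an OR Gate only thumbs-down, and each passes the accepted command to its inputs, flipped across inverted connections. A sketch follows (hypothetical data structures; the single-Photo-input exception above is omitted for brevity):

```python
UP, DOWN = "thumbs-up", "thumbs-down"

def flip(cmd):
    return DOWN if cmd == UP else UP

def learn(gadget, cmd, taught):
    """Propagate a learn command back through Gates to Photos.

    gadget: dict with 'kind' in {'Photo', 'AND', 'OR'}, an optional
    'inverted_output' flag, and for Gates an 'inputs' list of
    (child_gadget, input_inverted) pairs. taught collects
    (photo_name, command) pairs. Returns False if the command is one
    this Gate cannot learn (its thumbs button would be disabled)."""
    kind = gadget["kind"]
    if kind == "Photo":
        taught.append((gadget["name"], cmd))
        return True
    effective = flip(cmd) if gadget.get("inverted_output") else cmd
    # Before output inversion, an AND Gate can only learn thumbs-up
    # and an OR Gate can only learn thumbs-down.
    if (kind == "AND") != (effective == UP):
        return False
    child_cmd = UP if kind == "AND" else DOWN
    for child, input_inverted in gadget["inputs"]:
        learn(child, flip(child_cmd) if input_inverted else child_cmd, taught)
    return True
```

For a non-inverted AND Gate, learn thumbs-up sends thumbs-up to non-inverted inputs and thumbs-down to inverted inputs, matching the rules above; the other three cases follow by symmetry.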
Users need not remember or understand these rules. In essence, the only rule to remember is to click the color one wants the output to be. Whenever the mouse is over a thumbs button, a tool tip could be provided to tell exactly which Photos will be trained, or to explain why the button is disabled. When a thumbs button is clicked, clear but non-intrusive feedback can be provided to confirm that training has indeed occurred.
In the example of
The statistics are used to compute the Photo's sensitivity fuzzy threshold 4650, defined by low threshold t0 1120 and high threshold t1 1122 (see also
t0 = mlo + k·σlo
t1 = mhi − k·σhi (21)
The parameter k may be chosen as appropriate. In an illustrative embodiment, k=0.
The method for learning thumbs-up or thumbs-down is summarized by the following steps:
Photos can also retain the most recent manual threshold settings, if any, so that they can be restored if desired by the user. All statistics and settings can be saved for all Photos on HMI 830, so that learning can continue over multiple sessions. Users can clear and examine the statistics using appropriate HMI commands.
If a set of statistics contains no examples, default values for the mean can be used. In an illustrative embodiment, the defaults are
mlo = alo + 0.1(ahi − alo)
mhi = alo + 0.9(ahi − alo) (22)
where alo is the lowest possible analog output and ahi is the highest. If a set of statistics contains fewer than two examples, the standard deviation can be assumed to be 0. This means that learning from a single example can be allowed, although it is generally not encouraged.
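Equations 21 and 22 can be sketched together as follows. The function name is hypothetical, and the text does not specify whether a population or sample standard deviation is intended; the population form is assumed here.

```python
from statistics import mean, pstdev

def learn_thresholds(low_examples, high_examples, k=0.0,
                     a_lo=0.0, a_hi=100.0):
    """Compute the sensitivity fuzzy threshold (t0, t1) from the
    analog outputs of thumbs-down (low) and thumbs-up (high) examples:
        t0 = m_lo + k * s_lo,  t1 = m_hi - k * s_hi      (eq. 21)
    Defaults from eq. 22 are used when an example set is empty, and
    the standard deviation is taken as 0 for fewer than two examples."""
    def stats(xs, default_mean):
        if not xs:
            return default_mean, 0.0
        return mean(xs), (pstdev(xs) if len(xs) >= 2 else 0.0)

    m_lo, s_lo = stats(low_examples, a_lo + 0.1 * (a_hi - a_lo))
    m_hi, s_hi = stats(high_examples, a_lo + 0.9 * (a_hi - a_lo))
    return m_lo + k * s_lo, m_hi - k * s_hi
```

With k = 0, as in the illustrative embodiment, the thresholds are simply the two example means.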
In another illustrative embodiment, a learn command for a Detector computes the analog outputs that would result from each type of Detector operating on the ROI of the Detector being taught. Statistics for each Detector type are computed and stored, and used to suggest better Detector choices by looking for larger separations between the output low and output high examples.
Using a Phase-Locked Loop for Missing and Extra Object Detection
In one embodiment, a signal containing output pulses synchronized to the mark time, for example output signal 3160 containing output pulse 3180 (
In an illustrative embodiment, a software PLL internal to vision detector DSP 900 (
A best-fit line 4830 is computed using a weighted least-squares method further described below. The weights are chosen to weight more recent points more strongly than more distant points. The slope of best-fit line 4830 gives the object presentation period, and the time corresponding to count=1 gives the expected time of the next object.
In one embodiment, a fixed number of the most recent points are given equal weight, and older points are given zero weight, so that only recent points are used. The set of recent points being used is stored in a FIFO buffer, and well-known methods are used to update the least-squares statistics as new points are added and old points are removed.
In an illustrative embodiment, weights 4840 corresponding to the impulse response of a discrete Butterworth filter are used. A discrete Butterworth filter is a two-pole, critically damped, infinite impulse response digital low-pass filter that is easily implemented in software. It has excellent low-pass and step response characteristics, considers the entire history of mark times, has one adjustable parameter to control frequency response, and does not need a FIFO buffer.
The output yi at count i of a Butterworth filter with inputs xi is obtained by updating a velocity term and then advancing the output by it:
vi = vi−1 + f(xi − yi−1) − f′vi−1
yi = yi−1 + vi (23)
where f is the filter parameter,
f′ = 2√f (24)
and vi are intermediate terms called velocity terms.
If the input is a unit impulse x0=1, xi=0 for i≠0, the output is the impulse response, which will be referred to by the symbol wi. The effect of the filter is to convolve the input with the impulse response, which produces a weighted average of all previous inputs, where the weights are w−i. Weights 4840 are the values w−i for f=0.12.
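A sketch of this filter and its impulse response follows, assuming the output is advanced by the velocity term at each count (consistent with the correction equations given later). The function names are illustrative.

```python
import math

def butterworth(f):
    """Discrete two-pole, critically damped low-pass filter
    (equations 23-24): returns a stateful step function."""
    fp = 2.0 * math.sqrt(f)          # f' = 2 * sqrt(f)
    state = {"y": 0.0, "v": 0.0}

    def step(x):
        state["v"] += f * (x - state["y"]) - fp * state["v"]
        state["y"] += state["v"]
        return state["y"]
    return step

def impulse_weights(f, n):
    """First n terms of the impulse response w_i, i.e. the weights the
    filter effectively applies to past inputs."""
    step = butterworth(f)
    return [step(1.0 if i == 0 else 0.0) for i in range(n)]
```

Since the filter has unit DC gain, the weights sum to 1 in the limit, so the filter output is a true weighted average of past mark times.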
For the Butterworth PLL, three Butterworth filters are used, corresponding to the statistics needed to compute a least-squares best-fit line. Let the mark times be ti, where i=0 corresponds to the most recent object, i<0 to previous objects, and i=1 to the next object. Furthermore, let all times be relative to t0, i.e. t0=0. The following weighted sums are needed:
Note that the summations are over the range −∞ ≤ i ≤ 0. These values are obtained as the outputs of the three Butterworth filters, which are given inputs ti, i·ti, and ti².
The following additional values are needed, and are derived from the filter parameter f:
If best-fit line 4830 is given by ti = a·i + b, then
The value a is a very accurate measure of the current object presentation period. The equation of the line can be used to calculate the expected mark time at any point in the near future. The weighted RMS error of the best-fit line, as a fraction of object presentation period, is
which is a measure of the variability in the object presentation rate.
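The weighted best-fit line can equivalently be computed directly from an explicit list of weights, which is useful for checking a filter-based implementation. The following sketch writes out the weighted normal equations; the function name and indexing convention (index 0 is the most recent mark time, with t0 = 0) are illustrative.

```python
def fit_mark_line(mark_times, weights):
    """Weighted least-squares fit of t_i = a*i + b, where mark_times[j]
    is t_{-j} (j = 0 most recent) and weights[j] is the weight on it.
    Returns (a, b): a estimates the object presentation period, and
    a*1 + b predicts the mark time of the next object."""
    Sw = Si = Sii = St = Sit = 0.0
    for j, (t, w) in enumerate(zip(mark_times, weights)):
        i = -j                      # 0 for the most recent object, negative earlier
        Sw += w
        Si += w * i
        Sii += w * i * i
        St += w * t
        Sit += w * i * t
    det = Sii * Sw - Si * Si
    a = (Sit * Sw - Si * St) / det
    b = (Sii * St - Si * Sit) / det
    return a, b
```

For perfectly periodic mark times the fit is exact regardless of the weights, so a equals the period and b is zero.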
A new mark time to be input to the three filters occurs at time t1, which is simply the elapsed time between the new object and the most recent at t0=0. For each of the three Butterworth filters, adjustments must be made to the output and velocity terms to reset i=0 and t0=0 for the new object. This is necessary for equations 26 to be correct, and also for numerical accuracy. Without this correction, C would change with every input. Here are equations for the three filters, showing the corrections. Primes are used to indicate new values of the outputs and velocity terms, and the u terms are temporary values.
v′t = vt + f(t1 − yt) − f′vt
ut = yt + v′t
y′t = ut − t1
uxt = vxt + f(t1 − yxt) − f′vxt
v′xt = uxt − t1 − v′t
y′xt = yxt + v′xt − ut − yxt1
ut2 = vt2 + f(t1² − yt2) − f′vt2
v′t2 = ut2 − 2t1·v′t
y′t2 = yt2 + ut2 − 2t1·ut + t1² (29)
A Butterworth PLL's output and velocity terms can be initialized to correspond to a set of values (a, b, E) as follows:
This allows one to reset the PLL to start at a particular period p using equations 30 with a=p, b=0, and E=0. It also allows one to change the filter constant while the PLL is running by initializing the output and velocity terms using the current values of a, b, and E, and the new value of f.
In an illustrative embodiment, the Butterworth PLL determines whether or not it is locked onto the input mark times by considering the number of objects that have been seen since the PLL was reset, and the current value of E. The PLL is considered unlocked on reset and for the next n objects. Once at least n objects have been seen, the PLL locks when E goes below a lock threshold El, and unlocks when E goes above an unlock threshold Eu. In a further improvement, an unlocked filter parameter fu is used when the PLL is unlocked, and a locked filter parameter fl is used when the filter is locked. Switching filter parameters is done using equations 30 to initialize the output and velocity terms using the current values of a, b, and E.
In an illustrative embodiment, n=8, El=0.1, Eu=0.2, fl=0.05, and fu=0.25.
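The lock/unlock hysteresis described above can be sketched as a small state-update function. The function names and signatures are assumptions; the default parameters mirror the illustrative embodiment (n = 8, El = 0.1, Eu = 0.2, fl = 0.05, fu = 0.25).

```python
def pll_lock_step(locked, objects_seen, E, n=8, E_lock=0.1, E_unlock=0.2):
    """Return the new lock state of the PLL.

    The PLL is unlocked for the first n objects after reset; thereafter
    it locks when E falls below E_lock and unlocks when E rises above
    E_unlock. Because E_lock < E_unlock, the thresholds form a
    hysteresis band that prevents rapid lock/unlock chatter."""
    if objects_seen < n:
        return False
    if not locked and E < E_lock:
        return True
    if locked and E > E_unlock:
        return False
    return locked

def filter_constant(locked, f_locked=0.05, f_unlocked=0.25):
    """Select the filter parameter: fl when locked, fu when unlocked."""
    return f_locked if locked else f_unlocked
```

A larger fu lets an unlocked PLL converge quickly toward the input mark times, while the smaller fl makes a locked PLL smooth out jitter in the presentation rate.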
If m consecutive objects are missed, the next object will arrive at approximately
t_(m+1) = a(m+1) + b (31)
From this equation and the mark time t_n of a new object, one may determine m:
m = round((t_n − b)/a) − 1 (32)
To keep the equations for the Butterworth PLL correct, one must insert mark times for all of the missing objects. This may be accomplished by using
t_n/(m+1) (33)
m+1 times as input to equations 29.
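The missed-object handling can be sketched as follows: estimate m from the new mark time and the current line fit by solving equation 31, then synthesize m+1 substitute mark times to feed through equations 29. This is a minimal sketch; the function name is hypothetical, and the even spacing t_new/(m+1) is an assumption consistent with the text, chosen so the substitute intervals sum to the observed elapsed time.

```python
def fill_missing_marks(t_new, a, b):
    """Estimate the number m of missed objects from t_new ~ a(m+1) + b
    (equation 31), then return m+1 substitute mark times whose sum is
    t_new, so the filters end at the correct total elapsed time."""
    m = max(int(round((t_new - b) / a)) - 1, 0)  # m = 0 when nothing was missed
    step = t_new / (m + 1)
    return [step] * (m + 1)
```

For example, with a fitted period a = 1.0, b = 0, a new object arriving at t_new = 3.0 implies two missed objects, so three intervals of 1.0 would be fed to the filters in turn.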
It will be understood that any discrete low-pass filter can be used in place of a Butterworth filter to implement a PLL according to the best-fit line method of the present invention. Based on the method described above, equations 23 through 33 would be replaced as appropriate for the filter chosen.
The foregoing has been a detailed description of various embodiments of the invention. It is expressly contemplated that a wide range of modifications and additions can be made hereto without departing from the spirit and scope of this invention. For example, the processors and computing devices herein are exemplary, and a variety of processors and computers, both standalone and distributed, can be employed to perform the computations herein. Likewise, the imager and other vision components described herein are exemplary, and improved or differing components can be employed within the teachings of this invention. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.
This application is a divisional of U.S. patent application Ser. No. 11/136,103, titled “Method and Apparatus for Locating Objects,” filed on May 24, 2005, which is a continuation application of U.S. patent application Ser. No. 10/865,155, titled “Method and Apparatus for Visual Detection and Inspection of Objects,” filed on Jun. 9, 2004, the entire contents of both of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4214265 | Olesen | Jul 1980 | A |
4292666 | Hill et al. | Sep 1981 | A |
4384195 | Nosler | May 1983 | A |
4647979 | Urata | Mar 1987 | A |
4679075 | Williams et al. | Jul 1987 | A |
4847772 | Michalopoulos et al. | Jul 1989 | A |
4916640 | Gasperi | Apr 1990 | A |
4962538 | Eppler et al. | Oct 1990 | A |
4972494 | White et al. | Nov 1990 | A |
5018213 | Sikes | May 1991 | A |
5040056 | Sager et al. | Aug 1991 | A |
5121201 | Seki | Jun 1992 | A |
5146510 | Cox et al. | Sep 1992 | A |
5164998 | Reinsch et al. | Nov 1992 | A |
5177420 | Wada | Jan 1993 | A |
5184217 | Doering | Feb 1993 | A |
5198650 | Wike et al. | Mar 1993 | A |
5210798 | Ekchian et al. | May 1993 | A |
5233541 | Corwin et al. | Aug 1993 | A |
5262626 | Goren et al. | Nov 1993 | A |
5271703 | Lindqvist et al. | Dec 1993 | A |
5286960 | Longacre, Jr. et al. | Feb 1994 | A |
5298697 | Suzuki et al. | Mar 1994 | A |
5317645 | Perozek et al. | May 1994 | A |
5345515 | Nishi et al. | Sep 1994 | A |
5365596 | Dante et al. | Nov 1994 | A |
5420409 | Longacre, Jr. et al. | May 1995 | A |
5476010 | Fleming et al. | Dec 1995 | A |
5481712 | Silver et al. | Jan 1996 | A |
5500732 | Ebel et al. | Mar 1996 | A |
5562788 | Kitson et al. | Oct 1996 | A |
5581625 | Connell et al. | Dec 1996 | A |
5687249 | Kato | Nov 1997 | A |
5717834 | Werblin et al. | Feb 1998 | A |
5734742 | Asaeda | Mar 1998 | A |
5742037 | Scola et al. | Apr 1998 | A |
5751831 | Ono | May 1998 | A |
5802220 | Black et al. | Sep 1998 | A |
5809161 | Auty et al. | Sep 1998 | A |
5825483 | Michael et al. | Oct 1998 | A |
5852669 | Eleftheriadis et al. | Dec 1998 | A |
5872354 | Hanson | Feb 1999 | A |
5917602 | Bonewitz et al. | Jun 1999 | A |
5929418 | Ehrhart et al. | Jul 1999 | A |
5932862 | Hussey et al. | Aug 1999 | A |
5937096 | Kawai | Aug 1999 | A |
5942741 | Longacre et al. | Aug 1999 | A |
5943432 | Gilmore et al. | Aug 1999 | A |
5960097 | Pfeiffer et al. | Sep 1999 | A |
5960125 | Michael et al. | Sep 1999 | A |
5966457 | Lemelson | Oct 1999 | A |
6046764 | Kirby et al. | Apr 2000 | A |
6049619 | Anandan et al. | Apr 2000 | A |
6061471 | Coleman, Jr. et al. | May 2000 | A |
6072494 | Nguyen | Jun 2000 | A |
6072882 | White et al. | Jun 2000 | A |
6075882 | Mullins et al. | Jun 2000 | A |
6078251 | Landt et al. | Jun 2000 | A |
6088467 | Sarpeshkar et al. | Jul 2000 | A |
6115480 | Washizawa | Sep 2000 | A |
6158661 | Chadima, Jr. et al. | Dec 2000 | A |
6160494 | Sodi et al. | Dec 2000 | A |
6161760 | Marrs | Dec 2000 | A |
6169535 | Lee | Jan 2001 | B1 |
6169600 | Ludlow | Jan 2001 | B1 |
6173070 | Michael et al. | Jan 2001 | B1 |
6175644 | Scola et al. | Jan 2001 | B1 |
6175652 | Jacobson et al. | Jan 2001 | B1 |
6184924 | Schneider et al. | Feb 2001 | B1 |
6215892 | Douglass et al. | Apr 2001 | B1 |
6282462 | Hopkins | Aug 2001 | B1 |
6285787 | Kawachi et al. | Sep 2001 | B1 |
6298176 | Longacre, Jr. et al. | Oct 2001 | B2 |
6301610 | Ramser et al. | Oct 2001 | B1 |
6333993 | Sakamoto | Dec 2001 | B1 |
6346966 | Toh | Feb 2002 | B1 |
6347762 | Sims et al. | Feb 2002 | B1 |
6360003 | Doi et al. | Mar 2002 | B1 |
6396517 | Beck et al. | May 2002 | B1 |
6396949 | Nichani | May 2002 | B1 |
6408429 | Marrion et al. | Jun 2002 | B1 |
6446868 | Robertson et al. | Sep 2002 | B1 |
6483935 | Rostami et al. | Nov 2002 | B1 |
6487304 | Szeliski | Nov 2002 | B1 |
6525810 | Kipman | Feb 2003 | B1 |
6526156 | Black et al. | Feb 2003 | B1 |
6539107 | Michael et al. | Mar 2003 | B1 |
6545705 | Sigel et al. | Apr 2003 | B1 |
6549647 | Skunes et al. | Apr 2003 | B1 |
6573929 | Glier et al. | Jun 2003 | B1 |
6580810 | Yang et al. | Jun 2003 | B1 |
6587122 | King et al. | Jul 2003 | B1 |
6597381 | Eskridge et al. | Jul 2003 | B1 |
6608930 | Agnihotri et al. | Aug 2003 | B1 |
6618074 | Seeley et al. | Sep 2003 | B1 |
6621571 | Maeda et al. | Sep 2003 | B1 |
6625317 | Gaffin et al. | Sep 2003 | B1 |
6628805 | Hansen et al. | Sep 2003 | B1 |
6629642 | Swartz et al. | Oct 2003 | B1 |
6646244 | Aas et al. | Nov 2003 | B2 |
6668075 | Nakamura | Dec 2003 | B1 |
6677852 | Landt | Jan 2004 | B1 |
6681151 | Weinzimmer et al. | Jan 2004 | B1 |
6714213 | Lithicum et al. | Mar 2004 | B1 |
6741977 | Nagaya et al. | May 2004 | B1 |
6753876 | Brooksby et al. | Jun 2004 | B2 |
6761316 | Bridgelall | Jul 2004 | B2 |
6766414 | Francis | Jul 2004 | B2 |
6774917 | Foote et al. | Aug 2004 | B1 |
6816063 | Kubler | Nov 2004 | B2 |
6817982 | Fritz et al. | Nov 2004 | B2 |
6891570 | Tantalo et al. | May 2005 | B2 |
6919793 | Heinrich | Jul 2005 | B2 |
6944584 | Tenney et al. | Sep 2005 | B1 |
6947151 | Fujii et al. | Sep 2005 | B2 |
6973209 | Tanaka | Dec 2005 | B2 |
6985827 | Williams et al. | Jan 2006 | B2 |
6987528 | Nagahisa et al. | Jan 2006 | B1 |
6997556 | Pfleger | Feb 2006 | B2 |
6999625 | Nelson et al. | Feb 2006 | B1 |
7062071 | Tsujino et al. | Jun 2006 | B2 |
7066388 | He | Jun 2006 | B2 |
7070099 | Patel | Jul 2006 | B2 |
7085401 | Averbuch et al. | Aug 2006 | B2 |
7088387 | Freeman et al. | Aug 2006 | B1 |
7088846 | Han et al. | Aug 2006 | B2 |
7097102 | Patel et al. | Aug 2006 | B2 |
7175090 | Nadabar | Feb 2007 | B2 |
7181066 | Wagman | Feb 2007 | B1 |
7227978 | Komatsuzaki et al. | Jun 2007 | B2 |
7266768 | Ferlitsch et al. | Sep 2007 | B2 |
7271830 | Robins et al. | Sep 2007 | B2 |
7274808 | Baharav et al. | Sep 2007 | B2 |
7280685 | Beardsley et al. | Oct 2007 | B2 |
7516898 | Knowles et al. | Apr 2009 | B2 |
7604174 | Gerst et al. | Oct 2009 | B2 |
7657081 | Blais et al. | Feb 2010 | B2 |
7720364 | Scott et al. | May 2010 | B2 |
7751625 | Ulrich et al. | Jul 2010 | B2 |
7889886 | Matsugu et al. | Feb 2011 | B2 |
7960004 | Yee et al. | Jun 2011 | B2 |
7973663 | Hall | Jul 2011 | B2 |
7984854 | Nadabar | Jul 2011 | B2 |
8108176 | Nadabar et al. | Jan 2012 | B2 |
20010042789 | Krichever et al. | Nov 2001 | A1 |
20020005895 | Freeman et al. | Jan 2002 | A1 |
20020099455 | Ward | Jul 2002 | A1 |
20020109112 | Guha et al. | Aug 2002 | A1 |
20020122582 | Masuda et al. | Sep 2002 | A1 |
20020177918 | Pierel et al. | Nov 2002 | A1 |
20020181405 | Ying | Dec 2002 | A1 |
20020196336 | Batson et al. | Dec 2002 | A1 |
20020196342 | Walker et al. | Dec 2002 | A1 |
20030062418 | Barber et al. | Apr 2003 | A1 |
20030095710 | Tessadro | May 2003 | A1 |
20030113018 | Nefian et al. | Jun 2003 | A1 |
20030120714 | Wolff et al. | Jun 2003 | A1 |
20030137590 | Barnes et al. | Jul 2003 | A1 |
20030201328 | Jain et al. | Oct 2003 | A1 |
20030219146 | Jepson et al. | Nov 2003 | A1 |
20030227483 | Schultz et al. | Dec 2003 | A1 |
20040148057 | Breed et al. | Jul 2004 | A1 |
20040201669 | Guha et al. | Oct 2004 | A1 |
20040218806 | Miyamoto et al. | Nov 2004 | A1 |
20050184217 | Kong et al. | Aug 2005 | A1 |
20050226490 | Phillips et al. | Oct 2005 | A1 |
20050254106 | Silverbrook et al. | Nov 2005 | A9 |
20050257646 | Yeager | Nov 2005 | A1 |
20050275728 | Mirtich et al. | Dec 2005 | A1 |
20050275831 | Silver | Dec 2005 | A1 |
20050275833 | Silver | Dec 2005 | A1 |
20050275834 | Silver | Dec 2005 | A1 |
20050276445 | Silver et al. | Dec 2005 | A1 |
20050276459 | Eames et al. | Dec 2005 | A1 |
20050276460 | Silver et al. | Dec 2005 | A1 |
20050276461 | Silver et al. | Dec 2005 | A1 |
20050276462 | Silver et al. | Dec 2005 | A1 |
20060022052 | Patel et al. | Feb 2006 | A1 |
20060043303 | Safai et al. | Mar 2006 | A1 |
20060056732 | Holmes | Mar 2006 | A1 |
20060107211 | Mirtich et al. | May 2006 | A1 |
20060107223 | Mirtich et al. | May 2006 | A1 |
20060131419 | Nunnink | Jun 2006 | A1 |
20060133757 | Nunnink | Jun 2006 | A1 |
20060146337 | Hartog | Jul 2006 | A1 |
20060146377 | Marshall et al. | Jul 2006 | A1 |
20060223628 | Walker et al. | Oct 2006 | A1 |
20060249581 | Smith et al. | Nov 2006 | A1 |
20060249587 | Choi et al. | Nov 2006 | A1 |
20060283952 | Wang | Dec 2006 | A1 |
20070009152 | Kanda | Jan 2007 | A1 |
20070146491 | Tremblay et al. | Jun 2007 | A1 |
20070181692 | Barkan et al. | Aug 2007 | A1 |
20080004822 | Nadabar et al. | Jan 2008 | A1 |
20080011855 | Nadabar | Jan 2008 | A1 |
20080036873 | Silver | Feb 2008 | A1 |
20080063245 | Benkley et al. | Mar 2008 | A1 |
20080166015 | Haering et al. | Jul 2008 | A1 |
20080167890 | Pannese et al. | Jul 2008 | A1 |
20080205714 | Benkley | Aug 2008 | A1 |
20080219521 | Benkley | Sep 2008 | A1 |
20080226132 | Gardner et al. | Sep 2008 | A1 |
20080278584 | Shih et al. | Nov 2008 | A1 |
20080285802 | Bramblet et al. | Nov 2008 | A1 |
20080309920 | Silver | Dec 2008 | A1 |
20080310676 | Silver | Dec 2008 | A1 |
20090121027 | Nadabar | May 2009 | A1 |
20090128627 | Katsuyama et al. | May 2009 | A1 |
20090154779 | Satyan et al. | Jun 2009 | A1 |
20090257621 | Silver | Oct 2009 | A1 |
20090273668 | Mirtich et al. | Nov 2009 | A1 |
20100241901 | Jahn | Sep 2010 | A1 |
20100241911 | Shih | Sep 2010 | A1 |
20100241981 | Mirtich et al. | Sep 2010 | A1 |
20100318936 | Tremblay et al. | Dec 2010 | A1 |
Number | Date | Country |
---|---|---|
10007985 | Sep 2000 | DE |
10012715 | Sep 2000 | DE |
10040563 | Feb 2002 | DE |
0939382 | Sep 1999 | EP |
0815688 | May 2000 | EP |
0896290 | Oct 2004 | EP |
1469420 | Oct 2004 | EP |
1734456 | Dec 2006 | EP
1975849 | Oct 2008 | EP |
2226130 | Jun 1990 | GB |
2309078 | Jul 1997 | GB |
60147602 | Aug 1985 | JP |
H4-34068 | Feb 1992 | JP |
08-313454 | Nov 1996 | JP |
09-178670 | Jul 1997 | JP |
09185710 | Jul 1997 | JP |
9-288060 | Nov 1997 | JP |
11-101689 | Apr 1999 | JP |
2000-84495 | Mar 2000 | JP |
2000-227401 | Aug 2000 | JP |
2000-293694 | Oct 2000 | JP |
2000298103 | Oct 2000 | JP |
2000-322450 | Nov 2000 | JP |
2001-109884 | Apr 2001 | JP |
2001-194323 | Jul 2001 | JP |
2002-148201 | May 2002 | JP |
2002148205 | May 2002 | JP |
2002-214153 | Jul 2002 | JP |
2002318202 | Oct 2002 | JP |
2003270166 | Sep 2003 | JP |
2004-145504 | May 2004 | JP |
WO-96-09597 | Mar 1996 | WO |
WO-0141068 | Jul 2001 | WO |
WO 02015120 | Feb 2002 | WO |
WO 02075637 | Sep 2002 | WO |
WO-03102859 | Dec 2003 | WO |
WO 2005050390 | Jun 2005 | WO |
WO-2005124317 | Jun 2005 | WO |
WO-2005124709 | Dec 2005 | WO |
WO-2005124719 | Dec 2005 | WO |
WO2008118419 | Oct 2008 | WO |
WO20080118425 | Oct 2008 | WO |
Entry |
---|
iQ 180 Products, Adaptive Optics Associates, 90 Coles Road, Blackwood, NJ 08012-4683, (Dec. 2003). |
“Laser Scanning Product Guide”, Adaptive Optics Associates—Industrial Products and Systems 90 Coles Road Blackwood, NJ 08012, Industrial Holographic and Conventional Laser 1D, Omnidirectional Bar Codes Scanners,(Mar. 2003). |
“CV-2100 Series”, Keyence America, http://www.keyence.com/products/vision/cv 2100 spec.html, High-Speed Digital Machine Vision System, (Dec. 29, 2003). |
“CCD/CMOS Sensors Spot Niche Application”, PennWell Corporation, Vision System Design—Imaging and Machine Vision Technology,(2004). |
West, Perry C., “High Speed, Real-Time Machine Vision”, Imagenation and Automated Vision Systems, Inc., (2001). |
Asundi, A. et al., “High-Speed TDI Imaging for Peripheral Inspection”, Proc. SPIE vol. 2423, Machine Vision Applications in Industrial Inspection III, Frederick Y. Wu; Stephen S. Wilson; Eds.,(Mar. 1995),189 - 194. |
Baillard, C. et al., “Automatic Reconstruction of Piecewise Planar Models from Multiple Views”, CVPR, vol. 02, No. 2, (1999), 2559. |
Kim, Zuwhan et al., “Automatic Description of Complex Buildings with Multiple Images”, IEEE 0/7695-0813-8/00, (2000),155 - 162. |
Demotte, Donald “Visual Line Tracking”, Application Overview & Issues Machine Vision for Robot Guidance Workshop, (May 5, 2004). |
Siemens AG, “Simatic Machine Vision”, Simatic VS 100 Series, www.siemens.com/machine-vision, (Apr. 1, 2003). |
Stauffer, Chris et al., “Tracking-Based Automatic Object Recognition”, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, http://www.ai.mit.edu, (2001), pp. 133-134. |
Baumberg, A. M. et al., “Learning Flexible Models from Image Sequences”, University of Leeds, School of Computer Studies, Research Report Series, Report 93.36, (Oct., 1993), pp. 1-13. |
Rohr, K. “Incremental Recognition of Pedestrians from Image Sequences”, CVPR93(1993). |
Chang, Dingding et al., “Feature Detection of Moving Images using a Hierarchical Relaxation Method”, IEICE Trans. INF. & Syst., vol. E79-D, (Jul. 7, 1996). |
Zarandy, A. et al., “Vision Systems Based on the 128X128 Focal Plane Cellular Visual Microprocessor Chips”, IEEE (Mar. 2003), III-518—III-521. |
“SmartCapture Tool”, Feature Fact Sheet, Visionx Inc., www.visionxinc.com, (2003). |
Wilson, Andrew “CMOS/CCD sensors spot niche applications”, Vision Systems Design, (Jun., 2003). |
“ICS 100— Intelligent Camera Sensor”, SICK Product Information, SICK Industrial Sensors 6900 West 110th Street Minneapolis, MN 55438 www.sickusa.com, (Jan. 3, 2002). |
“Matsushita Imagecheckers”, NAiS Machine Vision—Matsushita Machine Vision Systems, (2003). |
“Matsushita LightPix AE10”, NAiS Machine Vision—Matsushita Machine Vision Systems, (2003). |
Corke, Peter I., et al., “Real Time Industrial Machine Vision”, Electrical Engineering CongressSydney, Australia, CSIRO Division of Manufacturing Technology, (1994). |
Marsh, R. et al., “The application of knowledge based vision to closed-loop control of the injection molding process”, SPIE vol. 3164, Faculty of Engineering, University of the West of England, United Kingdom, (1997), 605-16. |
Zarandy, AKOS et al., “Ultra-High Frame Rate Focal Plane Image Sensor and Processor”, IEEE Sensors Journal, vol. 2, No. 6, (2002). |
“CVL Vision Tools Guide”, Cognex MVS-8000 Series, Chapter 5, Symbol Tool, CVL 5.4, (Dec. 1999). |
“Cognex 4000/5000 SMD Placement Guidance Package, User's Manual”, Release 3.8.00, Chapter 15, (1998). |
“LM9630 100 x 128, 580 fps UltraSensitive Monochrome CMOS Image Sensor”, National Semiconductor Corp., www.national.com, Rev. 1.0, (Jan. 2004). |
“Blackfin Processor Instruction Set Reference”, Analog Devices, Inc., Revision.2.0, Part No. 82-000410-14, (May, 2003). |
“ADSP-BF533 Blackfin Processor Hardware Reference”, Analog Devices Inc.—Media Platforms and Services Group, Preliminary Revision - Part No. 82-002005-01, (Mar., 2003). |
“Cognex 3000/4000/5000 Image Processing”, Revision 7.4 590-0135 Edge Detection Too, (1996). |
National Instruments, “IMAQVision Builder Tutorial”, IMAQ XP-002356530, http://www.ni.com/pdf/manuals/322228c.pdf, (Dec. 2000). |
Denis, Jolivet “LabVIEW and IMAQ Vision Builder Provide Automated Visual Test System”, Semiconductor: IMAQ Advanced Analysis Toolkit, IMAQ Vision Builder—LabVIEW—National Instruments—XP002356529—URL http://www.ni.com/pdf/csma/us/JNDESWG.pdf, (2001). |
Chen, Y. H., “Computer Vision for General Purpose Visual Inspection: a Fuzzy Logic Approach”, Optics and Lasers in Engineering 22, Elsevier Science Limited, vol. 22, No. 3, (1995), pp. 182-192. |
Di Mauro, E. C., et al., “Check— a generic and specific industrial inspection tool”, IEE Proc.-Vis. Image Signal Process., vol. 143, No. 4, (Aug. 27, 1996), pp. 241-249. |
“Cognex VisionPro”, Getting Started—QuickStart Tutorial, Cognex Corporation,590-6560, Revision 3.5, (May, 2004), 69-94. |
UNO, T. et al., “A Method of Real-Time Recognition of Moving Objects and its Application”, Pattern Recognition; Pergamon Press, vol. 8, pp. 201-208, (1976), pp. 201-208. |
Haering, N. et al., “Visual Event Detection”, Kluwer Academic Publishers, Chapter 2, Section 8, (2001). |
IBM, “Software Controls for Automated Inspection Device Used to Check Interposer Buttons for Defects”, IP.com Journal, IP.com Inc., West Henrietta, NY, US, (Mar. 27, 2003). |
“Bi-i Bio-inspired Real-Time Very High Speed Image Processing Systems”, AnaLogic Computers Ltd., http://www.analogic-computers.com/cgi-bin/phprint21.php, (2004). |
“Cellular device processes at ultrafast speeds”, VisionSystems Design, (Feb. 2003). |
“Bi-i”, AnaLogic Computers Ltd., (2003). |
Hunt, Shane C., “Mastering Microsoft PhotoDraw 2000”, SYBEX, Inc., San Francisco, (May 21, 1999). |
Kahn, Phillip “Building blocks for computer vision systems”, IEEE Expert, vol. 8, No. 6, XP002480004, (Dec. 6, 1993), pp. 40-50. |
Whelan, P. et al., “Machine Vision Algorithms in Java”, Chapter 1—An Introduction to Machine Vision, SPRINGER-VERLAG, XP002480005, (2001). |
RVSI, “Smart Camera Reader for Directly Marked Data Matrix Codes”, HawkEye 1515 with GUI, (2004). |
Cognex Corporation, “Screen shot of the CheckMate GUI Ver 1.6”, (Jan., 2005). |
Avalon Vision Solutions, “If accuracy matters in your simplest vision applications Use the Validator”, (2006). |
Baumer Optronic, “Technische Daten”, www.baumeroptronic.com, Product Brochure, (Apr. 2006), 6. |
Olympus Industrial, “High Speed, High Quality Imaging Systems”, i-speed Product Brochure—Publisher Olympus Industrial, (2002). |
Olympus Industrial, “Design Philosophy”, i-speed, (2002). |
Matrox, “Interactive Windows Imaging Software for Industrial and Scientific Applications”, Inspector 4.0—Matrox Imaging, (Apr. 15, 2002), p. 8. |
Cognex Corporation, “Sensorpart FA45 Vision Sensor”, (Sep. 29, 2006). |
IO Industries, “High Speed Digital Video Recording Software 4.0”, IO industries, Inc.—Ontario, CA , (2002). |
Integrated Design Tools, “High-Speed CMOS Digital Camera”, X-Stream Vision Users Manual, (2000). |
Wright, Anne et al., “Cognachrome Vision System Users Guide”, Newton Research Labs, Manual Edition 2.0, Documents Software Version 26.0, (Jun. 3, 1996). |
Stemmer Imaging GmbH, “Going Multimedia with Common Vision Blox”, Product News, www.stemmer-imaging.de, (Mar. 3, 2004). |
Lavision GmbH, “High Speed CCD/CMOS Camera Systems”, Overview of state-of-the-art High Speed Digital Camera Systems—UltraSpeedStar, www.lavision.de, (Sep. 24, 2004). |
Cordin Company, “Electronic Imaging Systems”, High Speed Imaging.Solutions: 200-500 Series Cameras, (Aug. 4, 2004). |
Photron USA, Product information for “Ultima APX”, www.photron.com (Sep. 24, 2004). |
Photron USA, Product information for “FASTCAM PCI”, www.photron.com (Sep. 24, 2004). |
Photron USA, Product information for “Ultima 512”, www.photron.com (Sep. 24, 2004). |
Photron USA, Product information for “Ultima 1024”, www.photron.com (Sep. 24, 2004). |
Photron USA, Product information for “FASTCAM-X 1280 PCI”, www.photron.com (Sep. 24, 2004). |
Apple Computer Inc., Studio Display Users Manual online, retrieved on Nov. 24, 2010, retrieved from the Internet http://manuals.info.apple.com/en/studioDisplay—15inLCDUserManual.pdf, (1998). |
Cognex Corporation, “VisionPro Getting Started”, Revision 3.2, Chapter 5, 590-6508, copyright 2003. |
Allen-Bradley, “Bulletin 2803 VIM Vision Input Module”, Cat. No. 2803-VIM2, Printed USA, (1991. Submitted in 3 parts). |
Allen-Bradley, “Users Manual”, Bulletin 2803 VIM Vision Input Module, Cat. No. 2803-VIM1, (1987. Submitted in 2 parts). |
Allen-Bradley, “Bulletin 5370 CVIM Configurable Vision Input Module”, User Manual Cat. No. 5370-CVIM, (1995. Submitted in 3 parts). |
Cognex Corporation, “3000/4000/5000 Vision Tools”, revision 7.6, 590-0136, Chapter 13, (1996). |
Cognex Corporation, “Cognex 3000/4000/5000”, Vision Tools, Revision 7.6, 590-0136, Chapter 10 Auto-Focus, (1996). |
KSV Instruments Ltd., HiSIS 2002—High Speed Imaging System, copyright 1996-1998, http://www.changkyung.co.kr/ksv/hisis/highsp.htm. |
Vietze, Oliver “Miniaturized Vision Sensors for Process Automation”, (Jan. 2, 2005). |
Number | Date | Country | |
---|---|---|---|
20130163847 A1 | Jun 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11136103 | May 2005 | US |
Child | 13623387 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10865155 | Jun 2004 | US |
Child | 11136103 | US |