This invention relates to vision systems, and more particularly to Graphical User Interface (GUI) elements for monitoring and controlling such vision systems.
Industrial manufacturing relies on automatic inspection of objects being manufactured. One form of automatic inspection that has been in common use for decades is based on optoelectronic technologies that use electromagnetic energy, usually infrared or visible light, photoelectric sensors, and some form of electronic decision making.
One well-known form of optoelectronic automatic inspection uses an arrangement of photodetectors. A typical photodetector has a light source and a single photoelectric sensor that responds to the intensity of light that is reflected by a point on the surface of an object, or transmitted along a path that an object may cross. A user-adjustable sensitivity threshold establishes a light intensity above which (or below which) an output signal of the photodetector will be energized.
One photodetector, often called a gate, is used to detect the presence of an object to be inspected. Other photodetectors are arranged relative to the gate to sense the light reflected by appropriate points on the object. By suitable adjustment of the sensitivity thresholds, these other photodetectors can detect whether certain features of the object, such as a label or hole, are present or absent. A decision as to the status of the object (for example, pass or fail) is made using the output signals of these other photodetectors at the time when an object is detected by the gate. This decision is typically made by a programmable logic controller (PLC), or other suitable electronic equipment.
Automatic inspection using photodetectors has various advantages. Photodetectors are inexpensive, simple to set up, and operate at very high speed (outputs respond within a few hundred microseconds of the object being detected, although a PLC will take longer to make a decision).
Automatic inspection using photodetectors has various disadvantages, however, including:
Another well-known form of optoelectronic automatic inspection uses a device that can capture a digital image of a two-dimensional field of view (FOV) in which an object to be inspected is located, and then analyze the image and make decisions. Such a device is usually called a machine vision system, or simply a vision system. The image is captured by exposing a two-dimensional array of photosensitive elements for a brief period, called the integration or shutter time, to light that has been focused on the array by a lens. The array is called an imager and the individual elements are called pixels. Each pixel measures the intensity of light falling on it during the shutter time. The measured intensity values are then converted to digital numbers and stored in the memory of the vision system to form the image, which is analyzed by a digital processing element such as a computer, using methods well-known in the art to determine the status of the object being inspected.
In some cases the objects are brought to rest in the field of view, and in other cases the objects are in continuous motion through the field of view. An event external to the vision system, such as a signal from a photodetector, or a message from a PLC, computer, or other piece of automation equipment, is used to inform the vision system that an object is located in the field of view, and therefore an image should be captured and analyzed. Such an event is called a trigger.
Machine vision systems avoid the disadvantages associated with using an arrangement of photodetectors. They can analyze patterns of brightness reflected from extended areas, easily handle many distinct features on the object, accommodate line changeovers through software systems and/or processes, and handle uncertain and variable object locations.
Machine vision systems have disadvantages compared to an arrangement of photodetectors, including:
Machine vision systems have limitations that arise because they make decisions based on a single image of each object, located in a single position in the field of view (each object may be located in a different and unpredictable position, but for each object there is only one such position on which a decision is based). This single position provides information from a single viewing perspective, and a single orientation relative to the illumination. The use of only a single perspective often leads to incorrect decisions. It has long been observed, for example, that a change in perspective of as little as a single pixel can in some cases change an incorrect decision to a correct one. By contrast, a human inspecting an object usually moves it around relative to his eyes and the lights to make a more reliable decision.
Also, the limitations of machine vision systems arise in part because they operate too slowly to capture and analyze multiple perspectives of objects in motion, and too slowly to react to events happening in the field of view. Since most vision systems can capture a new image simultaneously with analysis of the current image, the maximum rate at which a vision system can operate is determined by the larger of the capture time and the analysis time. Overall, one of the most significant factors in determining this rate is the number of pixels comprising the imager.
The availability of new low-cost imagers, such as the LM9630 from National Semiconductor of Santa Clara, Calif., that operate at relatively low resolution (approximately 100×128 pixels), high frame rate (up to 500 frames per second) and high sensitivity allowing short shutter times with inexpensive illumination (e.g., 300 microseconds with LED illumination), has made possible the implementation of a novel vision detector that employs on-board processors to control machine vision detection and analysis functions. A novel vision detector using such an imager, and an overall inspection system employing such a vision detector, are taught in co-pending and commonly assigned U.S. patent application Ser. No. 10/865,155, published as U.S. Publication No. US2005-0275831 on Dec. 15, 2005, entitled METHOD AND APPARATUS FOR VISUAL DETECTION AND INSPECTION OF OBJECTS, by William M. Silver, filed Jun. 9, 2004, the teachings of which are expressly incorporated herein by reference (herein also termed the “above-incorporated-by-reference METHOD AND APPARATUS”).
An advantage of the above-incorporated-by-reference detection and inspection METHOD AND APPARATUS is that the vision detector can be implemented within a compact housing that is programmed using a PC or other Human-Machine Interface (HMI) device (via, for example, a Universal Serial Bus (USB)), and is then deployed to a production line location for normal runtime operation. The outputs of the apparatus are (in one implementation) a pair of basic High/Low lines indicating detection of the object and whether that object passes or fails based upon the characteristics being analyzed. These outputs can be used (for example) to reject a failed object using a rejection arm mounted along the line that is signaled by the apparatus' output.
By way of example,
In an alternate example, the vision detector 100 sends signals to a PLC for various purposes, which may include controlling a reject actuator. In another exemplary implementation, suitable in extremely high-speed applications or where the vision detector cannot reliably detect the presence of an object, a photodetector is used to detect the presence of an object and sends a signal to the vision detector for that purpose. In yet another implementation, there are no discrete objects, but rather material flows past the vision detector continuously—for example a web. In this case the material is inspected continuously, and signals are sent by the vision detector to automation equipment, such as a PLC, as appropriate.
Basic to the function of the vision detector 100 in the above-incorporated-by-reference METHOD AND APPARATUS is the ability to exploit the imager's quick frame rate and low-resolution image capture to allow a large number of image frames of an object passing down the line to be captured and analyzed in real time. Using these frames, the apparatus' on-board processor can decide when the object is present and use location information to analyze designated areas of interest on the object that must be present in a desired pattern for the object to “pass” inspection.
With brief reference to
Boxes labeled “c”, such as box 220, represent image capture by the vision detector 100. Boxes labeled “a”, such as box 230, represent image analysis. It is desirable that capture “c” of the next image be overlapped with analysis “a” of the current image, so that (for example) analysis step 230 analyzes the image captured in capture step 220. In this timeline, analysis is shown as taking less time than capture, but in general analysis will be shorter or longer than capture depending on the application details. If capture and analysis are overlapped, the rate at which a vision detector can capture and analyze images is determined by the longer of the capture time and the analysis time. This is the “frame rate”. The above-incorporated-by-reference METHOD AND APPARATUS allows objects to be detected reliably without a trigger signal, such as that provided by a photodetector.
Each analysis step “a” first considers the evidence that an object is present. Frames where the evidence is sufficient are called active. Analysis steps for active frames are shown with a thick border, for example analysis step 240. In an exemplary implementation, inspection of an object begins when an active frame is found, and ends when some number of consecutive inactive frames are found. In the example of
At the time that inspection of an object is complete, for example at the end of analysis step 248, decisions are made on the status of the object based on the evidence obtained from the active frames. In an exemplary implementation, if an insufficient number of active frames were found then there is considered to be insufficient evidence that an object was actually present, and so operation continues as if no active frames were found. Otherwise an object is judged to have been detected, and evidence from the active frames is judged in order to determine its status, for example pass or fail. A variety of methods may be used to detect objects and determine status within the scope of this example; some are described below and many others will occur to those skilled in the art. Once an object has been detected and a judgment made, a report may be made to appropriate automation equipment, such as a PLC, using signals well-known in the art. In such a case a report step would appear in the timeline. The example of
Note in particular that the report 260 may be delayed well beyond the inspection of subsequent objects such as object 110 (
Once inspection of an object is complete, the vision detector 100 may enter an idle step 280. Such a step is optional, but may be desirable for several reasons. If the maximum object rate is known, there is no need to be looking for an object until just before a new one is due. An idle step will eliminate the chance of false object detection at times when an object couldn't arrive, and will extend the lifetime of the illumination system because the lights can be kept off during the idle step.
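For illustration only, the frame-by-frame detection logic summarized above can be sketched in software form. The following is a minimal sketch, not the implementation of the above-incorporated-by-reference METHOD AND APPARATUS; the function names, the analyze() callable, the 0.5 evidence threshold, the minimum of three active frames and the two-frame inactive gap are all assumed placeholder values.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InspectionResult:
    passed: bool                            # pass/fail judgment pooled over the active frames
    active_scores: List[float] = field(default_factory=list)

def inspect_stream(frames, analyze,
                   presence_thresh=0.5,     # assumed evidence level at which a frame is "active"
                   min_active=3,            # assumed minimum count of active frames per object
                   inactive_gap=2):         # consecutive inactive frames that end an inspection
    """Yield one InspectionResult per detected object from a stream of captured frames.

    analyze(frame) is assumed to return (presence_evidence, pass_evidence), each in [0, 1].
    """
    active, gap = [], 0
    for frame in frames:
        presence, score = analyze(frame)
        if presence >= presence_thresh:        # active frame: object evidence is sufficient
            active.append(score)
            gap = 0
        elif active:                           # inspection in progress, but this frame is inactive
            gap += 1
            if gap >= inactive_gap:            # object has left the field of view
                if len(active) >= min_active:  # enough evidence that an object was really present
                    yield InspectionResult(sum(active) / len(active) >= 0.5, active)
                # otherwise, operation continues as if no active frames had been found
                active, gap = [], 0
```

Pooling the per-frame pass evidence by simple averaging is only one possibility; as noted above, a variety of methods may be used to detect objects and determine status.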
The processor of the exemplary METHOD AND APPARATUS is provided with two types of software elements to use in making its decisions: “Locators” that locate the object and “Detectors” that decide whether an object feature is present or absent. The decisions made by both Locators and Detectors are used to judge whether an object is detected and, if so, whether it passes inspection. In one example, a Locator can be described simply as a one-dimensional edge detector in a region of interest. The vision detector is configured for locating objects by placing Locators at certain positions in an image where an edge feature of the object can be seen when the object is in the field of view. The Locator can be oriented with respect to the direction the object is moving, and sized to ensure that the edge feature of the object can be located at multiple positions while in the field of view. During analysis, the location of the edge feature of the object within the Locator can be reported, as well as a logical output state indicating that the location is known.
Detectors are vision tools that operate on a region of interest and produce a logical output state indicating the presence or absence of a feature in an image of the object. The vision detector is configured for detecting features of an object by placing Detectors at certain positions in an image where object features can be seen when the object is located by the Locators. Various types of Detectors can be used, such as Brightness Detectors, Edge Detectors, and Contrast Detectors.
Detectors can be linked to the location of the feature determined by a Locator to further refine the presence detection and inspection of the object. Accordingly, in each frame where the object may be viewed at a different perspective, the location of the object determined by the Locator will be different, and the position of the Detectors in the image can be moved according to the location determined by the Locator. The operation of the vision detector at high frame rates therefore permits it to capture and analyze multiple images of the object while it passes through the field of view.
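By way of a non-limiting sketch, the interaction between a Locator and a linked Detector can be modeled as follows. The class names, the region-of-interest layout and the simple edge and contrast measurements are illustrative assumptions, not the actual vision tools of the vision detector.

```python
import numpy as np

class Locator:
    """One-dimensional edge finder in a region of interest (ROI); reports an offset and validity."""
    def __init__(self, x, y, width, height, edge_thresh=20.0):   # edge_thresh is an assumed scale
        self.x, self.y, self.w, self.h = x, y, width, height
        self.edge_thresh = edge_thresh

    def locate(self, image):
        roi = image[self.y:self.y + self.h, self.x:self.x + self.w].astype(float)
        profile = roi.mean(axis=0)                  # collapse the ROI to a 1-D profile
        grads = np.abs(np.diff(profile))            # simple gradient magnitude along the profile
        offset = int(np.argmax(grads))              # strongest edge position within the ROI
        found = bool(grads[offset] >= self.edge_thresh)   # logical output: location is known
        return offset, found

class Detector:
    """ROI tool producing a logical output; its position can follow the Locator's reported offset."""
    def __init__(self, x, y, width, height, thresh):
        self.x, self.y, self.w, self.h, self.thresh = x, y, width, height, thresh

    def decide(self, image, dx=0, dy=0):
        roi = image[self.y + dy:self.y + dy + self.h,
                    self.x + dx:self.x + dx + self.w].astype(float)
        return bool(roi.std() >= self.thresh)       # e.g., a contrast-style measurement
```

In use, the offset reported by the Locator in each frame would be passed as dx to each linked Detector, so that the Detector ROIs track the object as it moves through the field of view.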
The above discussion of Locators and Detectors is further illustrated by way of example in
The Locator 320 is used to detect and locate the top edge of the object, and the Locator 322 is used to detect and locate the right edge. A Brightness Detector 330 is used to help detect the presence of the object. In this example the background is brighter than the object, and the sensitivity threshold is set to distinguish the two brightness levels, with the logic output inverted to detect the darker object and not the brighter background. Together the Locators 320 and 322, and the Brightness Detector 330, provide the evidence needed to judge that an object has been detected, as further described below. A Contrast Detector 340 is used to detect the presence of the hole 312. When the hole 312 is absent the contrast would be very low, and when present the contrast would be much higher. A Spot Detector could also be used. An Edge Detector 360 is used to detect the presence and position of the label 310. If the label 310 is absent, mis-positioned horizontally, or significantly rotated, the analog output of the Edge Detector would be very low. A Brightness Detector 350 is used to verify that the correct label has been applied. In this example, the correct label is white and incorrect labels are darker colors.
As the object (110 in
The choice of Gadgets to wire to ObjectDetect is made by a user based on knowledge of the application. In the example of
The logic output of ObjectDetect Judge 400 is wired to AND Gate 470. The logic output of ObjectPass Judge 402 is inverted (circle 403) and also wired to AND Gate 470. The ObjectDetect Judge is set to “output when done” mode, so a pulse appears on the logic output of ObjectDetect Judge 400 after an object has been detected and inspection is complete. Since the logic output of ObjectPass 402 has been inverted, this pulse will appear on the logic output of AND Gate 470 only if the object has not passed inspection. The logic output of AND Gate 470 is wired to an Output Gadget 480, named “Reject”, which controls an output signal from the vision detector that can be connected directly to a reject actuator 170 (
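The wiring just described can also be expressed, purely for illustration, as fuzzy-logic operations in software. The operator choices below (minimum for AND, one-minus for NOT) and the 0.5 true/false boundary are common conventions assumed for this sketch, not a statement of the vision detector's actual implementation.

```python
def fuzzy_and(*values):    # a common fuzzy AND operator (minimum of the inputs)
    return min(values)

def fuzzy_not(value):      # a common fuzzy NOT operator (the inversion circle on the wire)
    return 1.0 - value

def reject_output(object_detect, object_pass, true_thresh=0.5):
    """Mimic the wiring: ObjectDetect AND (NOT ObjectPass) drives the "Reject" Output Gadget.

    object_detect -- pulse emitted by the ObjectDetect Judge when inspection is complete
    object_pass   -- logic output of the ObjectPass Judge
    Returns True when the reject actuator should be signaled.
    """
    gate = fuzzy_and(object_detect, fuzzy_not(object_pass))
    return gate >= true_thresh     # values at or above 0.5 treated as logical true

# Object detected (0.9) but failed inspection (0.2): the reject output fires.
assert reject_output(0.9, 0.2) is True
# Object detected (0.9) and passed inspection (0.8): no reject.
assert reject_output(0.9, 0.8) is False
```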
To aid the user's understanding of the operation of the exemplary vision detector 100, Gadgets and/or wires can change their visual appearance to indicate fuzzy logic values. For example, Gadgets and/or wires can be displayed red when the logic value is below 0.5, and green otherwise. In
The HMI GUI screen 196 used to assist in setup and testing of the vision detector allows many convenient functions of the vision detector 100 to be manipulated by a user with relative ease owing to the highly visual nature of the GUI. In implementing the above-described vision detector, a more-complex machine vision system, or any other system that requires setup based upon image analysis, it is desirable to make the setup as uncomplicated and user-friendly as possible. Accordingly, use of the wide range of features inherent in a GUI is highly desirable. In particular, techniques that allow easier, and more intuitive, manipulation and monitoring of the particular system's operating parameters, such as the relevant values for threshold, operating range and sensitivity for Locators and Detectors, are quite desirable.
This invention provides a system and method for employing GUI-based non-numeric slide buttons and bar meters to set up and monitor operating parameters of a vision system (the term “vision system” as used herein including the above-described vision detector). Such parameters can include, but are not limited to, the threshold at which a feature is activated in viewing an image. Operating parameters also include the underlying range of contrast values and levels of brightness intensities (or by input inversion, the level of darkness) recognized and acted upon by the vision system.
In a broadly defined illustrative embodiment, the display and control of vision system operating parameters employs a graphical user interface (GUI) that displays an image view of a field of view of a vision system imager. The GUI includes a location thereon that displays at least one operating parameter in a “non-numeric” graphical format, which includes a scale having a predetermined area and a moving region that has a distinct appearance with respect to the scale, wherein an edge of the moving region defines a level of the operating parameter. A user-movable setting graphic is located with respect to the area of the scale. The setting graphic selectively moves along the scale and provides a boundary through which the moving region crosses when it attains a desired (user-set or machine-set) level of the operating parameter. The vision system responds in a predetermined manner based upon attainment of this boundary-defined level. The graphical format is “non-numeric” in that the representations of the scale, distinct moving region and setting graphic are free of entry of numeric data with respect to the operating parameter.
In one embodiment, graphical representations of operating parameters are displayed in a parameter box on the GUI with moving bars that are shaded, patterned or colored so as to provide a relative level between two extremes on a scale of the given operating parameter. The endpoints of the scale can be established by analyzing the relevant extremes on a subject image view. The current level of the given parameter is displayed as a bar that extends along the scale a distance proportional to the current level of the parameter. Input of operating parameter settings with respect to the scale is made by moving a setting slider along the scale between the extremes. The position of the slider establishes the user-input setting relative to the scale. In an illustrative embodiment, scales, level bars and setting sliders can also be displayed on the image view itself, adjacent to a given image view feature, which is the subject of the scale.
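As an illustrative sketch only (and not part of the embodiment itself), the proportional relationship among the parameter level, the filled bar and the slider setting can be expressed as follows; the pixel width of the scale, the clamping behavior and the example endpoints are assumptions made solely for this sketch.

```python
def level_to_fill(value, scale_min, scale_max, bar_width_px=200):
    """Map a parameter reading onto the bar: how many pixels of the scale should be filled."""
    span = max(scale_max - scale_min, 1e-9)
    frac = min(max((value - scale_min) / span, 0.0), 1.0)    # clamp to the scale extremes
    return int(round(frac * bar_width_px))

def slider_to_setting(slider_px, scale_min, scale_max, bar_width_px=200):
    """Inverse mapping: convert the slider's pixel position back into a parameter setting."""
    frac = min(max(slider_px / bar_width_px, 0.0), 1.0)
    return scale_min + frac * (scale_max - scale_min)

def setting_attained(value, setting):
    """The system reacts when the edge of the moving bar crosses the slider boundary."""
    return value >= setting

# Example: a reading of 37 on a scale whose endpoints were measured as 10 and 90.
print(level_to_fill(37, 10, 90))           # bar filled to 68 of 200 pixels
print(slider_to_setting(120, 10, 90))      # slider at pixel 120 corresponds to a setting of 58.0
print(setting_attained(37, 58.0))          # False: the bar has not yet crossed the slider
```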
In one embodiment, the operating parameters can include activation threshold, brightness and contrast. The non-numeric graphical representations of current readings of these operating parameters by the vision system and sliders for user-setting levels of these operating parameters are displayed selectively in association with discrete graphic features or tools on the image view, such as Locators and Detectors. The graphical representations can be provided in a dialog box that is accessed for the discrete graphic features or tools on the image view and/or can be displayed on the image view itself in a location adjacent to (or otherwise associated with) the discrete graphic features or tools on the image view.
The invention description below refers to the accompanying drawings, of which:
The vision detector 500 of this exemplary embodiment functions generally in accordance with principles described in the above-incorporated-by-reference METHOD AND APPARATUS FOR VISUAL DETECTION AND INSPECTION OF OBJECTS, by William M. Silver, and summarized above in connection with the exemplary vision detector 100 (
The DSP 600 can be any device capable of digital computation, information storage, and interface to other digital elements, including but not limited to a general-purpose computer, a PLC, or a microprocessor. It is desirable that the DSP 600 be inexpensive but fast enough to handle a high frame rate. It is further desirable that it be capable of receiving and storing pixel data from the imager simultaneously with image analysis.
In the illustrative embodiment of
The high frame rate desired by a vision detector suggests the use of an imager unlike those that have been used in prior art vision systems. It is desirable that the imager be unusually light-sensitive, so that it can operate with extremely short shutter times using inexpensive illumination. It is further desirable that it be able to digitize and transmit pixel data to the DSP far faster than prior art vision systems. It is moreover desirable that it be inexpensive and have a global shutter.
These objectives may be met by choosing an imager with much higher light sensitivity and lower resolution than those used by prior art vision systems. In the illustrative embodiment of
It is desirable that the illumination 640 be inexpensive and yet bright enough to allow short shutter times. In an illustrative embodiment, a bank of high-intensity red LEDs operating at 630 nanometers is used, for example the HLMP-ED25 manufactured by Agilent Technologies. In another embodiment, high-intensity white LEDs are used to implement desired illumination.
In the illustrative embodiment of
As used herein an “image capture device” provides means to capture and store a digital image. In the illustrative embodiment of
A variety of engineering tradeoffs can be made to provide efficient operation of an apparatus according to the present invention for a specific application. Consider the following definitions:
b: fraction of the field of view (FOV) occupied by the portion of the object that contains the visible features to be inspected, determined by choosing the optical magnification of the lens 650 so as to achieve good use of the available resolution of imager 660;
e: fraction of the FOV to be used as a margin of error;
n: desired minimum number of frames in which each object will typically be seen;
s: spacing between objects as a multiple of the FOV, generally determined by manufacturing conditions;
p: object presentation rate, generally determined by manufacturing conditions;
m: maximum fraction of the FOV that the object will move between successive frames, chosen based on above values; and
r: minimum frame rate, chosen based on above values.
From these definitions it can be seen that m≦(1-b-e)/n and r≧s·p/m.
To achieve good use of the available resolution of the imager, it is desirable that b is at least 50%. For dynamic image analysis, n should be at least 2. Therefore, it is further desirable that the object moves no more than about one-quarter of the field of view between successive frames.
In an illustrative embodiment, reasonable values might be b=75%, e=5%, and n=4. This implies that m≦5%, i.e. that one would choose a frame rate so that an object would move no more than about 5% of the FOV between frames. If manufacturing conditions were such that s=2, then the frame rate r would need to be at least approximately 40 times the object presentation rate p. To handle an object presentation rate of 5 Hz, which is fairly typical of industrial manufacturing, the desired frame rate would be at least around 200 Hz. This rate could be achieved using an LM9630 with at most a 3.3-millisecond shutter time, as long as the image analysis is arranged so as to fit within the 5-millisecond frame period. Using available technology, it would be feasible to achieve this rate using an imager containing up to about 40,000 pixels.
With the same illustrative embodiment and a higher object presentation rate of 12.5 Hz, the desired frame rate would be at least approximately 500 Hz. An LM9630 could handle this rate by using at most a 300-microsecond shutter. In another illustrative embodiment, one might choose b=75%, e=15%, and n=5, so that m≦2%. With s=2 and p=5 Hz, the desired frame rate would again be at least approximately 500 Hz.
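Purely as a numerical check of the arithmetic above, the following sketch applies the relationships m≦(1-b-e)/n and r≧s·p/m to the two illustrative embodiments; the function names are arbitrary and the code is not part of the apparatus.

```python
def max_motion_per_frame(b, e, n):
    """m: largest fraction of the FOV the object may move between frames, m <= (1 - b - e) / n."""
    return (1.0 - b - e) / n

def min_frame_rate(s, p, m):
    """r: minimum frame rate for spacing s (in FOVs) and presentation rate p, r >= s * p / m."""
    return s * p / m

# First illustrative embodiment: b = 75%, e = 5%, n = 4, s = 2, p = 5 Hz.
m = max_motion_per_frame(0.75, 0.05, 4)       # 0.05: at most 5% of the FOV per frame
print(min_frame_rate(2, 5.0, m))              # 200.0 Hz, matching the 5-millisecond frame period

# Same embodiment at p = 12.5 Hz, and a second embodiment with b = 75%, e = 15%, n = 5, p = 5 Hz.
print(min_frame_rate(2, 12.5, m))                                   # 500.0 Hz
print(min_frame_rate(2, 5.0, max_motion_per_frame(0.75, 0.15, 5)))  # approximately 500 Hz
```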
Having described the general architecture and operation of an exemplary vision system (vision detector 500) that may support an HMI in accordance with an embodiment of this invention, reference is now made to
In this embodiment, the GUI 700 is provided as part of a programming application running on the HMI and receiving interface information from the vision detector. In the illustrative embodiment, a .NET framework, available from Microsoft Corporation of Redmond, Wash., is employed on the HMI to generate GUI screens. Appropriately formatted data is transferred over the link between the vision detector and HMI to create screen displays and populate screen data boxes, and to transmit back selections made by the user on the GUI. Techniques for creating appropriate screens and transferring data between the HMI and the vision detector's HMI interface should be clear to those of ordinary skill and are described in further detail below.
The screen 700 includes a status pane 702 in a column along the left side. This pane contains a current status box 704, the dialogs for controlling general setup 706, setup of object detection with Locators and Detectors 708, object inspection tool setup 710 and runtime/test controls 712. The screen 700 also includes a right-side column having a pane 720 with help buttons.
The lower center of the screen 700 contains a current selection control box 730. The title 732 of the box 730 relates to the selections in the status pane 702. In this example, the user has clicked select job 734 in the general setup box 706. Note that the general setup box also allows access to an item (736) for accessing a control box (not shown) that enables setup of the imager (also termed “camera”), which includes entry of production line speed to determine shutter time and gain. In addition, the general setup box allows the user to set up a part trigger (item 738) via another control box (not shown). This may be an external trigger upon which the imager begins active capture and analysis of a moving object, or it may be an “internal” trigger in which the presence of a part is recognized due to analysis of a certain number of captured image frames (as a plurality of complete object image frames are captured within the imager's field of view).
The illustrated select job control box 730 allows the user to select from a menu 740 of job choices. In general, a job is either stored in an appropriate memory (on the PC or the vision detector) or is created as a new job. Once the user has selected either a stored job or a new job, a further screen is accessed with the Next button 742. These further control boxes can, by default, be the camera setup and trigger setup boxes described above.
Central to the screen 700 is the image view display 750, which is provided above the control box 730 and between the columns 702 and 720 (being similar to image view window 198 in
As shown in
Before describing further the procedure for manipulating and using the GUI and various non-numeric elements according to this invention, reference is made briefly to the bottommost window 770 which includes a line of miniaturized image frames that comprise a so-called “film strip” of the current grouping of stored, captured image frames 772. These frames 772 each vary slightly in bottle position with respect to the FOV, as a result of the relative motion. The film strip is controlled by a control box 774 at the bottom of the left column.
Reference is now made to
In this example, when the user “clicks” on the cursor placement, the screen presents the control box 810, which now displays an operating parameter box 812. This operating parameter box 812 displays a single non-numeric parameter bar element 814 that reports threshold for the given Locator. Referring to
While the scale 910, shown in
The scale 910 of
Note that the term “non-numeric” as used herein means that graphically displayed operating parameter level data and controlling graphics can be used to monitor operating parameters of the vision system and to control parameter levels without resort to the entry of numerical data either by direct keying of numbers or other numeric data entry techniques. Rather, the distinctly appearing level bar or region interacts directly with the movable setting graphic (slider) to display and control a given parameter level, and both the level region and setting graphic are similarly proportional to the overall scale.
As also shown in
By way of example,
The threshold bar element, and other non-numeric graphical representations of vision system operating parameters herein, are generated by graphic applications that reside on the HMI. These HMI graphic applications are responsive to numerical values that are typically generated within the processor of the vision system (processor 600 of vision detector 500 in this example). The numerical values in the processor may represent a variety of particular data points (such as pixel intensity, gradient, etc.). These specific data values are appropriately scaled into standardized values that the GUI application expects to receive, and that allow the GUI to readily generate a standard graphical scale therefrom. Similarly, the GUI transmits back a standard scale of values that the vision system identifies (through handshake, headers or other identifying information) as related to a particular type of vision system data. The vision system thus translates the information into an appropriate set of data that can be meaningfully used by the vision system processor.
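By way of example only, the translation between processor-level values and the standardized values expected by the GUI might resemble the following sketch; the 0-to-100 standardized scale, the parameter names and the raw ranges shown are assumptions made for illustration and do not describe the actual data format exchanged between the vision detector and the HMI.

```python
# Hypothetical registry of raw ranges, keyed by the type of vision system data being reported
# (e.g., 8-bit pixel intensity for brightness, an application-specific span for contrast).
RAW_RANGES = {
    "brightness": (0.0, 255.0),
    "contrast":   (0.0, 128.0),
    "threshold":  (0.0, 1.0),
}

def to_gui_scale(param, raw_value):
    """Vision system to HMI: normalize a raw reading onto the standard 0..100 GUI scale."""
    lo, hi = RAW_RANGES[param]
    return 100.0 * (raw_value - lo) / (hi - lo)

def from_gui_scale(param, gui_value):
    """HMI to vision system: translate a slider setting back into processor units."""
    lo, hi = RAW_RANGES[param]
    return lo + (gui_value / 100.0) * (hi - lo)

# A brightness reading of 191 fills roughly 75% of the bar; a slider at 40 maps back to 102 counts.
print(round(to_gui_scale("brightness", 191)))    # 75
print(from_gui_scale("brightness", 40))          # 102.0
```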
Reference is now made to a further example in
As shown in
Note that a threshold bar element 1360 is automatically appended to the Detector circle 1340 by the GUI. This allows the user to ascertain the current settings and readings of the particular Detector. In this example, the current threshold for the associated Detector 1340 is shown as a shaded, colored or patterned threshold bar 1362 that extends along the scale 1366 relative to the current position of the threshold setting slider 1364. These indications correspond with those of the threshold bar 1332 (and threshold setting slider 1333) in the operating parameter box 1330. In this example the threshold has been attained, as indicated by the indicator 1380. The representation of threshold data on the image view is particularly helpful where a plurality of Detectors are present on the image view, and only one Detector's status is currently shown (typically the last Detector clicked) in the operating parameter box 1330. Note that by clicking any Detector or Locator in the image view, the relevant control box and associated parameter box are retrieved and displayed, while the levels of other Detectors remain continuously displayed on the image view.
Referring to
Referring to
The contrast parameter box 1510 of
Finally,
Hence, the above description provides useful and highly flexible mechanisms for allowing minimally trained persons to quickly employ a vision detector without the need for intensive human programming or labor in setup. While the example of a setup procedure is described above, the non-numeric elements of this invention are displayed and manipulable during a runtime monitoring and testing phase, both on the runtime (or played-back, stored) image view and in the control box of the GUI, in association with a selected (clicked) image view graphic feature or tool (a Detector or Locator in this example).
While the non-numeric graphic elements of this invention are described in association with exemplary image detector tools, such elements can be applied to any vision system that requires monitoring of thresholds and settings, and entry of data that falls within a relative range or scale of values.
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope thereof. For example, while ROIs for Locators are shown as rectangles and Detectors are shown as circles, their ROIs may each define a different shape or a variety of selectable and/or customized shapes as needed. Likewise, while a particular form of HMI and GUI are shown, a variety of hardware and GUI expressions are expressly contemplated. For example, in alternate embodiments access to operating parameters may be through alternate display screens or boxes. While level bars and sliders are used for graphic representations of data in this description, it is expressly contemplated that such non-numeric graphical representations can be defined by any graphical character or layout on a GUI that shows levels and allows entry of data by altering a filled area of an overall area, representative of an absolute scale between two range extremes. Finally, while the graphic elements shown and described herein are termed “non-numeric,” it is expressly contemplated that numeric gradations or labels can be applied to the graphic bars, sliders and other representations as appropriate to assist the user in understanding relative (for example, percentages) and actual (for example, pixel intensity) levels for various operating parameters. In general, however, the viewing and setting of levels is accomplished with area-filling (level bars, for example) and movement-based (sliders, for example) graphics. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of the invention.
The present application is a continuation of commonly assigned U.S. patent application Ser. No. 10/988,120, which was filed on Nov. 12, 2004, published as U.S. Publication No. US2010-0198375 on Aug. 5, 2010, now U.S. Pat. No. 7,720,315, by Brian Mirtich for a SYSTEM AND METHOD FOR DISPLAYING AND USING NON-NUMERIC GRAPHIC ELEMENTS TO CONTROL AND MONITOR A VISION SYSTEM and is hereby incorporated by reference. This application is related to co-pending and commonly assigned U.S. patent application Ser. No. 10/865,155, published as U.S. Publication No. US2005-0275831 on Dec. 15, 2005, entitled METHOD AND APPARATUS FOR VISUAL DETECTION AND INSPECTION OF OBJECTS, by William M. Silver, filed Jun. 9, 2004, the teachings of which are expressly incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4214265 | Olesen | Jul 1980 | A |
4292666 | Hill et al. | Sep 1981 | A |
4384195 | Nosler | May 1983 | A |
4647979 | Urata | Mar 1987 | A |
4847772 | Michalopoulos et al. | Jul 1989 | A |
4916640 | Gasperi | Apr 1990 | A |
4962538 | Eppler et al. | Oct 1990 | A |
4972494 | White et al. | Nov 1990 | A |
5018213 | Sikes | May 1991 | A |
5040056 | Sager et al. | Aug 1991 | A |
5121201 | Seki | Jun 1992 | A |
5146510 | Cox et al. | Sep 1992 | A |
5164998 | Reinsch | Nov 1992 | A |
5177420 | Wada | Jan 1993 | A |
5184217 | Doering | Feb 1993 | A |
5198650 | Wike et al. | Mar 1993 | A |
5210798 | Ekchian et al. | May 1993 | A |
5233541 | Corwin et al. | Aug 1993 | A |
5262626 | Goren et al. | Nov 1993 | A |
5286960 | Longacre, Jr. et al. | Feb 1994 | A |
5298697 | Suzuki et al. | Mar 1994 | A |
5317645 | Perozek et al. | May 1994 | A |
5345515 | Nishi et al. | Sep 1994 | A |
5365596 | Dante et al. | Nov 1994 | A |
5420409 | Longacre, Jr. et al. | May 1995 | A |
5476010 | Fleming et al. | Dec 1995 | A |
5481712 | Silver et al. | Jan 1996 | A |
5581625 | Connell et al. | Dec 1996 | A |
5687249 | Kato | Nov 1997 | A |
5717834 | Werblin et al. | Feb 1998 | A |
5734742 | Asaeda | Mar 1998 | A |
5742037 | Scola et al. | Apr 1998 | A |
5751831 | Ono | May 1998 | A |
5802220 | Black et al. | Sep 1998 | A |
5809161 | Auty et al. | Sep 1998 | A |
5825483 | Michael et al. | Oct 1998 | A |
5852669 | Eleftheriadis et al. | Dec 1998 | A |
5872354 | Hanson | Feb 1999 | A |
5917602 | Bonewitz et al. | Jun 1999 | A |
5929418 | Ehrhart et al. | Jul 1999 | A |
5932862 | Hussey et al. | Aug 1999 | A |
5937096 | Kawai | Aug 1999 | A |
5942741 | Longacre et al. | Aug 1999 | A |
5943432 | Gilmore et al. | Aug 1999 | A |
5960097 | Pfeiffer et al. | Sep 1999 | A |
5960125 | Michael et al. | Sep 1999 | A |
5966457 | Lemelson | Oct 1999 | A |
6046764 | Kirby et al. | Apr 2000 | A |
6049619 | Anandan et al. | Apr 2000 | A |
6061471 | Coleman et al. | May 2000 | A |
6072494 | Nguyen | Jun 2000 | A |
6072882 | White et al. | Jun 2000 | A |
6075882 | Mullins et al. | Jun 2000 | A |
6078251 | Landt et al. | Jun 2000 | A |
6088467 | Sarpeshkar et al. | Jul 2000 | A |
6115480 | Washizawa | Sep 2000 | A |
6158661 | Chadima, Jr. et al. | Dec 2000 | A |
6160494 | Sodi et al. | Dec 2000 | A |
6161760 | Marrs | Dec 2000 | A |
6169535 | Lee | Jan 2001 | B1 |
6169600 | Ludlow | Jan 2001 | B1 |
6173070 | Michael et al. | Jan 2001 | B1 |
6175644 | Scola et al. | Jan 2001 | B1 |
6175652 | Jacobson et al. | Jan 2001 | B1 |
6184924 | Schneider et al. | Feb 2001 | B1 |
6215892 | Douglass et al. | Apr 2001 | B1 |
6282462 | Hopkins | Aug 2001 | B1 |
6285787 | Kawachi et al. | Sep 2001 | B1 |
6298176 | Longacre, Jr. et al. | Oct 2001 | B2 |
6301610 | Ramser et al. | Oct 2001 | B1 |
6333993 | Sakamoto | Dec 2001 | B1 |
6346966 | Toh | Feb 2002 | B1 |
6347762 | Sims et al. | Feb 2002 | B1 |
6360003 | Doi et al. | Mar 2002 | B1 |
6396517 | Beck et al. | May 2002 | B1 |
6396949 | Nichani | May 2002 | B1 |
6408429 | Marrion et al. | Jun 2002 | B1 |
6446868 | Robertson et al. | Sep 2002 | B1 |
6483935 | Rostami et al. | Nov 2002 | B1 |
6487304 | Szeliski et al. | Nov 2002 | B1 |
6525810 | Kipman | Feb 2003 | B1 |
6526156 | Black et al. | Feb 2003 | B1 |
6539107 | Michael et al. | Mar 2003 | B1 |
6542692 | Houskeeper | Apr 2003 | B1 |
6545705 | Sigel et al. | Apr 2003 | B1 |
6549647 | Skunes et al. | Apr 2003 | B1 |
6573929 | Glier et al. | Jun 2003 | B1 |
6580810 | Yang et al. | Jun 2003 | B1 |
6587122 | King et al. | Jul 2003 | B1 |
6597381 | Eskridge et al. | Jul 2003 | B1 |
6608930 | Agnihotri et al. | Aug 2003 | B1 |
6618074 | Seeley et al. | Sep 2003 | B1 |
6621571 | Maeda et al. | Sep 2003 | B1 |
6628805 | Hansen et al. | Sep 2003 | B1 |
6629642 | Swartz et al. | Oct 2003 | B1 |
6646244 | Aas et al. | Nov 2003 | B2 |
6668075 | Nakamura et al. | Dec 2003 | B1 |
6677852 | Landt | Jan 2004 | B1 |
6681151 | Weinzimmer et al. | Jan 2004 | B1 |
6741977 | Nagaya et al. | May 2004 | B1 |
6753876 | Brooksby et al. | Jun 2004 | B2 |
6761316 | Bridgelall | Jul 2004 | B2 |
6774917 | Foote et al. | Aug 2004 | B1 |
6816063 | Kubler | Nov 2004 | B2 |
6817982 | Fritz et al. | Nov 2004 | B2 |
6825856 | Fazzio et al. | Nov 2004 | B1 |
6891570 | Tantalo et al. | May 2005 | B2 |
6919793 | Heinrich | Jul 2005 | B2 |
6944584 | Tenney et al. | Sep 2005 | B1 |
6973209 | Tanaka | Dec 2005 | B2 |
6985827 | Williams et al. | Jan 2006 | B2 |
6987528 | Nagahisa et al. | Jan 2006 | B1 |
6997556 | Pfleger | Feb 2006 | B2 |
6999625 | Nelson et al. | Feb 2006 | B1 |
7062071 | Tsujino et al. | Jun 2006 | B2 |
7066388 | He | Jun 2006 | B2 |
7070099 | Patel | Jul 2006 | B2 |
7085401 | Averbuch et al. | Aug 2006 | B2 |
7088387 | Freeman et al. | Aug 2006 | B1 |
7088846 | Han et al. | Aug 2006 | B2 |
7097102 | Patel et al. | Aug 2006 | B2 |
7175090 | Nadabar | Feb 2007 | B2 |
7181066 | Wagman | Feb 2007 | B1 |
7227978 | Komatsuzaki et al. | Jun 2007 | B2 |
7266768 | Ferlitsch et al. | Sep 2007 | B2 |
7274808 | Baharav et al. | Sep 2007 | B2 |
7280685 | Beardsley | Oct 2007 | B2 |
7604174 | Gerst et al. | Oct 2009 | B2 |
7657081 | Blais et al. | Feb 2010 | B2 |
20010042789 | Krichever et al. | Nov 2001 | A1 |
20020005895 | Freeman et al. | Jan 2002 | A1 |
20020037770 | Paul et al. | Mar 2002 | A1 |
20020099455 | Ward | Jul 2002 | A1 |
20020122582 | Masuda et al. | Sep 2002 | A1 |
20020177918 | Pierel et al. | Nov 2002 | A1 |
20020181405 | Ying | Dec 2002 | A1 |
20020196336 | Batson et al. | Dec 2002 | A1 |
20020196342 | Walker et al. | Dec 2002 | A1 |
20030062418 | Barber et al. | Apr 2003 | A1 |
20030095710 | Tessadro | May 2003 | A1 |
20030113018 | Nefian et al. | Jun 2003 | A1 |
20030120714 | Wolff et al. | Jun 2003 | A1 |
20030137590 | Barnes et al. | Jul 2003 | A1 |
20030201328 | Jam et al. | Oct 2003 | A1 |
20030219146 | Jepson et al. | Nov 2003 | A1 |
20030227483 | Schultz et al. | Dec 2003 | A1 |
20040122306 | Spoonhower et al. | Jun 2004 | A1 |
20040148057 | Breed et al. | Jul 2004 | A1 |
20040218806 | Miyamoto et al. | Nov 2004 | A1 |
20050173633 | Tanaka et al. | Aug 2005 | A1 |
20050226490 | Phillips et al. | Oct 2005 | A1 |
20050254106 | Silverbrook et al. | Nov 2005 | A9 |
20050257646 | Yeager | Nov 2005 | A1 |
20050275728 | Mirtich et al. | Dec 2005 | A1 |
20050275831 | Silver | Dec 2005 | A1 |
20050275833 | Silver | Dec 2005 | A1 |
20050275834 | Silver | Dec 2005 | A1 |
20050276445 | Silver et al. | Dec 2005 | A1 |
20050276459 | Eames et al. | Dec 2005 | A1 |
20050276460 | Silver et al. | Dec 2005 | A1 |
20050276461 | Silver et al. | Dec 2005 | A1 |
20050276462 | Silver et al. | Dec 2005 | A1 |
20060022052 | Patel et al. | Feb 2006 | A1 |
20060107211 | Mirtich et al. | May 2006 | A1 |
20060107223 | Mirtich et al. | May 2006 | A1 |
20060131419 | Nunnink | Jun 2006 | A1 |
20060133757 | Nunnink | Jun 2006 | A1 |
20060146337 | Hartog | Jul 2006 | A1 |
20060146377 | Marshall et al. | Jul 2006 | A1 |
20060223628 | Walker et al. | Oct 2006 | A1 |
20060249581 | Smith et al. | Nov 2006 | A1 |
20060283952 | Wang | Dec 2006 | A1 |
20070009152 | Kanda | Jan 2007 | A1 |
20070146491 | Tremblay et al. | Jun 2007 | A1 |
20070181692 | Barkan et al. | Aug 2007 | A1 |
20080036873 | Silver | Feb 2008 | A1 |
20080063245 | Benkley et al. | Mar 2008 | A1 |
20080166015 | Haering et al. | Jul 2008 | A1 |
20080167890 | Pannese et al. | Jul 2008 | A1 |
20080205714 | Benkley | Aug 2008 | A1 |
20080219521 | Benkley | Sep 2008 | A1 |
20080285802 | Bramblet et al. | Nov 2008 | A1 |
20090273668 | Mirtich et al. | Nov 2009 | A1 |
20100318936 | Tremblay et al. | Dec 2010 | A1 |
Number | Date | Country |
---|---|---|
10012715 | Sep 2000 | DE |
10040563 | Feb 2002 | DE |
0896290 | Feb 1999 | EP |
0939382 | Sep 1999 | EP |
0815688 | May 2000 | EP |
1469420 | Oct 2004 | EP |
1734456 | Dec 2006 | EP |
2226130 | Jun 1990 | GB |
2309078 | Jul 1997 | GB |
60147602 | Aug 1985 | JP |
9-288060 | Nov 1997 | JP |
11-101689 | Apr 1999 | JP |
2000-84495 | Mar 2000 | JP |
2000-293694 | Oct 2000 | JP |
2000-322450 | Nov 2000 | JP |
2001-109884 | Apr 2001 | JP |
2001-194323 | Jul 2001 | JP |
2002-148205 | May 2002 | JP |
WO-9609597 | Mar 1996 | WO |
WO-0215120 | Feb 2002 | WO |
WO-02075637 | Sep 2002 | WO |
WO-03102859 | Dec 2003 | WO |
WO-2005050390 | Jun 2005 | WO |
WO-2005124719 | Dec 2005 | WO |
Entry |
---|
National Instruments, IMAQ Vision for LabVIEW User Manual, Part No. 371007A-01, Aug. 2004, pp. i, 1-1 to 2-9, 3-2, 3-3, 5-1 & 5-2, http://www.ni.com/pdf/manuals/371007a.pdf. |
Apple Computer Inc., Studio Display User's Manual [online], 1998 [Retrieved on Nov. 24, 2010]. Retrieved from the Internet<URL:http://manuals.info.apple.com/en/StudioDisplay—15inchLCDUserManual.PDF>. |
Cognex VisionPro, Getting Started—QuickStart Tutorial, Cognex Corporation, 590-6560, Revision 3.5, pp. 69-94, May 2004. |
Cognex 3000/4000/5000 Image Processing, Revision 7.4 590-0135 Edge Detection Tool, 1996. |
CVL Vision Tools Guide, Cognex MVS-8000 Series, Chapter 5, Symbol Tool, CVL 5.4, Dec. 1999. |
Shane C. Hunt, Mastering Microsoft Photodraw 2000, May 21, 1999, Sybex Inc., pp. 131, 132 and 247. |
Cognex 4000/5000 SMD Placement Guidance Package, User's Manual, Release 3.8.00, Chapter 15, 590-6168, 1998. |
Avalon Vision Solutions, If accuracy matters in your simplest vision applications Use the Validator, 2006. |
Baumer Optronic, Technishche Daten, www.baumeroptonic.com, Product Brochure, 6 pages, Apr. 2006. |
Cognex Corporation, Screen shot of the CheckMate GUI Ver. 1.6, Jan. 2005. |
Cognex Corporation, Sensorpart FA45 Vision Sensor, Sep. 29, 2006. |
Vietze, Oliver, Miniaturized Vision Sensors for Process Automation, Jan. 2, 2005. |
Integrated Design Tools, High-Speed CMOS Digital Camera, X-Stream Vision User's Manual, 2000. |
IO Industries, High Speed Digital Video Recording Software 4.0, IO Industries, Inc.—Ontario, CA, 2002. |
Phillip, Kahn, Building Blocks for Computer Vision Systems, IEEE Expert, vol. 8, No. 6, XP002480004, pp. 40-50, Dec. 6, 1993. |
Matrox, Interactive Windows Imaging Software for Industrial and Scientific Applications, Inspector 4.0—Matrox imaging, pp. 8, Apr. 15, 2002. |
Olympus Industrial, Design Philosophy, i-speed, 2002. |
Olympus Industrial, High Speed, High Quality Imaging Systems, i-speed Product Brochure—Publisher Olympus Industrial, 2002. |
RVSI, Smart Camera Reader for Directly Marked Data Matrix Codes, HawkEye 1515 with GUI, 2004. |
Whelan, P. et al., Machine Vision Algorithms in Java, Chapter 1—An Introduction to Machine Vision, Springer-Verlag, XP002480005, 2001. |
Photron, USA, Product information for FASTCAM-X 1280 PCI, Copyright 2004, www.photron.com. |
Photon USA, Product Information for FastCAM PCI, Copyright 2004, www.photron.com. |
Photron, USA, Product Information for Ultima 1024, Copyright 2004 www.photron.com. |
Photron, USA, Product information for Ultima 512, Copyright 2004 www.photron.com. |
Photron, USA, Product Information for Ultima APX, Copyright 2004, www.photron.com. |
KSV Instruments Ltd. HiSIS 2002—High Speed Imaging System, www.ksvltd.fi, 2004. |
ICS 100, Intelligent Camera Sensor, SICK Product Information, SICK Industrial Sensors, 6900 West 110th St, Minneapolis, MN 55438, www.sickusa.com, Jan. 3, 2002. |
Matsushita Imagecheckers, NAiS Machine Vision, Matsushita Machine Vision Systems, 2003. |
Rohr, K. Incremental Recognition of Pedestrians from Image Sequences, CVPR93, 1993. |
Chang, Dingding et al., Feature Detection of Moving Images Using a Hierarchical Relaxation Method, IEICE Trans. Inf. & Syst., vol. E79-D, Jul. 7, 1996. |
Zarandy, A. et al., vision Systems Based on the 128X128 Focal Plane Cellular Visual Microprocessor Chips, IEEE. Mar. 2003, III-518-III-521. |
SmartCapture Tool, Feature Fact Sheet, Visionx, Inc., www.visionxinc.com, 2003. |
Wilson, Andrew, CMOS/CCD sensors spot niche applications, Vision Systems, 2003. |
Matsushita LightPix AE10, NAiS Machine Vision, Matsushita Machine Vision Systems, 2003. |
Corke, Peter I., et al., Real Time Industrial Machine Vision, Electrical Engineering Congress Sydney, Australia, CSIRO Division of Manufacturing Technology, 1994. |
Marsh, R. et al., The application of Knowledge based vision to closed-loop control of the injection molding process, SPIE vol. 3164, Faculty of Engineering University of the West of England United Kingdom, 1997, pp. 605-616. |
Zarandy, Akos, et al., Ultra-High Frame Rate Focal Plane Image Sensor and Processor, IEEE Sensors Journal, vol. 2, No. 6, 2002. |
LM9630 100×128, 580 fps UltraSensitive Monochrome CMOS Image Sensor, National Semiconductor Corp., www.national.com, Rev. 1.0, Jan. 2004. |
Analog Devices, Inc., Blackfin Processor Instruction Set Reference, Revision 2.0, Part No. 82-000410-14, May 2003. |
ADSP-BF533 Blackfin Processor Hardware Reference, Analog Devices Inc., Media Platforms and Services Group, Preliminary Revision, Part No. 82-002005-01, Mar. 2003. |
National Instruments, IMAQVision Builder Tutorial, IMAQ XP-002356530, http://www.ni.com/pdf/manuals/322228c.pdf, Dec. 2000. |
Denis, Jolivet, LabView and IMAQ Vision Builder Provide Automated Visual Builder, LabVIEW, National Instruments, XP002356529, http://www.ni.com/pdf/csrna/us/JNDESWG.pdf, 2001. |
Chen, Y.H., Computer vision for General Purpose Visual Inspection: a Fuzzy Logic Approach, Optics and Lasers in Engineering 22, Elsevier Science Limited, vol. 22, No. 3, 1995, pp. 182-192. |
Di Mauro, E.C., et al., Check, a generic and specific industrial inspection tool, IEEE Proc.-Vis. Image Signal Process, vol. 143, No. 4, Aug. 27, 1996, pp. 241-249. |
Uno, T. et al., A Method of Real-Time Recognition of Moving Objects and its Application, Pattern Recognition: Pergamon Press, vol. 8, pp. 201-208, 1976. |
Hearing, N., et al., Visual Event Detection, Kluwer Academic Publisher, Chapter 2, Section 8, 2001. |
IBM, Software Controls for Automated Inspection Device Used to Check Interposer Buttons for Defects, IP.com Journal, IP.com Inc., West Henrietts, NY US, Mar. 27, 2003. |
Wright, Anne, et al., Congachrome Vision System User's Guide, Newton Research Labs, Manual Edition 2.0, Documents Software Version 26.0, Jun. 3, 1996. |
Stemmer Imaging GmbH, Going Multimedia With Common vision Blox, Product News, www.stemmer-imaging.de, Mar. 3, 2004. |
Cordin Company, Electronic Imaging Systems, High Speed Imaging Solutions: 200-500 Series Cameras, www.cordin.com, 2004. |
Bi-I, AnaLogic Computers Ltd., 2003. |
Bi-I, Bio-inspired Real-Time Very High Speed Image Processing Systems, AnaLogic Computers Ltd., http://www.analogic-computers.com/cgi-bin/phprint21.php, 2004. |
Cellular device processes at ultrafast speeds, VisionSystems Design, Feb. 2003. |
LaVision GMBH, High Speed CCD/CMOS Camera Systems, Overview of State-of-the-Art High Speed Digital Camera Systems, UltraSpeedStar, www.lavision.de, Sep. 24, 2004. |
10-K SEC Filing, iQ 180 Products, Adaptive Optics Associates 900 Coles Road, Blackwood, NJ 08012-4683, Dec. 2003. |
Laser Scanning Product Guide, Adaptive Optics Associates, Industrial Products and Systems, 900 Coles Road, Blackwood, NJ 08012-4683, Industrial Holographic and Conventional Laser 1D, Omnidirectional Bar Code Scanners, Mar. 2003. |
CV-2100 Series, Keyence America, http://www.keyence.com/products/vision/cv—2100—spec.html, High-Speed Digital Machine Vision System, Dec. 29, 2003. |
West, Perry C., High-Speed, Real-Time Machine Vision, Imagenation and Automated Vision Systems, Inc., 2001. |
Asundi, A., et al., High-Speed TDI Imaging for Peripheral Inspection, Proc. SPIC vol. 2432, Machine Vision Applications in Industrial Inspection IIII, Frederick Y. Wu, Stephen S. Wilson, Eds., Mar. 1995, pp. 189-194. |
Baillard, C., et al., Automatic Reconstruction of Piecewise Planar Models from Multiple Views, CVPR, vol. 02, No. 2, 1999, pp. 2559. |
Kim, Zuwhan et al., Automatic Description of Complex Buildings with Multiple Images, IEEE 0-7695-0813, 2000, pp. 155-162. |
Siemens AG, Simatic Machine Vision, Simatic VS 100 Series, www.siemens.com/machine-vision, Apr. 1, 2003. |
Bauberg, A.M. et al., Learning Flexible Models from Image Sequences, University of Leeds, School of Computer Studies, Research Report Series, Report 93.36, Oct. 1993, pp. 1-13. |
CCD/CMOS Sensors Spot Niche Application, PennWell Corporation, Vision System Design—Imaging and Machine Vision Technology, (2004). |
Cognex Corporation, “VisionPro Getting Started”, Revision 3.2, 590-6508, copyright 2003. |
Demotte, Donald “Visual Line Tracking”, Application Overview & Issues Machine Vision for Robot Guidance Workshop, (May 5, 2004). |
HawkEye 1515—Smart Camera Reader for Directly Marked Data Matrix Codes, RVSI 486 Amherst Street, Nashua, NH 03063, (2004). |
6,768,414, Jul. 20, 2004, Francis (withdrawn). |
Allen-Bradley, Bulletin 2803 VIM Vision Input Module, Cat. No. 2803-VIM2, Printed USA, 1991 (Submitted in 3 parts). |
Allen-Bradley, User's Manual, Bulletin 2803 VIM Vision Input Module, Cat. No. 2803-VIM1, 1987 (Submitted in 2 parts). |
Allen-Bradley, Bulletin 5370 CVIM Configurable Vision Input Module, User Manual Cat. No. 5370-CVIM, 1995 (Submitted in 3 parts). |
Cognex Corporation, 3000/4000/5000 Vision Tools, revision 7.6, p. 590-0136, Chapter 13, 1996. |
Cognex Corporation, Cognex 3000/4000/5000, Vision Tools, Revision 7.6, p. 590-0136, Chapter 10, 1996. |
Stauffer, Chris et al., “Tracking-Based Automatic Object Recognition”, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA http://www.ai.mit.edu, (2001),pp. 133-134. |
Number | Date | Country | |
---|---|---|---|
20100241981 A1 | Sep 2010 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10988120 | Nov 2004 | US |
Child | 12758455 | US |