The present invention relates to mark verification systems and more specifically to a method for calibrating a mark verification system wherein a verifier reader commences calibration when information in an image obtained by the verifier indicates that a calibration process should commence.
Many different industries require that marks be applied to manufactured components so that the components can be tracked during distribution, when installed or assembled, during maintenance processes, during use and after use. For instance, in the jet engine industry, jet engines include, among other components, turbines that include turbine blades that are manufactured in various size lots. Here, each turbine blade is marked when manufactured so that the blade can be tracked. Prior to the blade being disposed of, if any defect is ever detected in the blade, the defect can be traced back to a lot and a manufacturing process associated therewith so that any possible defects in other blades of the lot can be identified. Where marks are applied directly to components/parts, the marks are generally referred to as direct part marks (DPMs).
To directly mark components, known marking systems have been set up that include a marking station that applies a mark to a component. For instance, in at least some cases a marking station will apply a DataMatrix barcode symbol to each manufactured component where a DataMatrix symbol is a two-dimensional barcode that stores from 1 to about 2,000 characters. An exemplary DataMatrix symbol is typically square and can range from 0.001 inch per side up to 14 inches per side. As an example of density, 500 numeric-only characters can be encoded in a 1-inch square DataMatrix symbol using a 24-pin dot matrix marking machine.
Despite attempts to apply marks that can be read consistently thereafter, mark application errors sometimes occur such that the mark cannot subsequently be read and decoded properly on a consistent basis. For instance, in some cases the surface to which the mark is applied may be somewhat discolored so that the contrast of the mark against the background of the application surface is not optimal. As another instance, in some cases where a mark consists of a plurality of dots, the dot sizes may be too large so that spaces therebetween are not clearly discernible or the dot sizes may be too small to be recognized by some types of readers. As still other instances, axial non-uniformity or grid non-uniformity of the applied mark may be too great for the mark to be reliably read. Many other mark metrics may be imperfect and may render a mark difficult if not impossible to decode using many readers.
Whether or not a mark that has been applied to a component is readable often depends on the reading and decoding capabilities of the reader used to read and decode the mark. For instance, some relatively complex and expensive readers are capable of reading extremely distorted marks, while other simpler readers cannot read marks that are not almost perfect.
To verify that applied marks are of sufficient quality to be read by readers at a specific facility (i.e., by the least sophisticated reader that is used at a specific facility), marking systems often include, in addition to a marking station, a stationary verification station and at least a portion of a transfer line to transfer freshly marked components from the marking station to the verification station. Here, after a mark is applied to a component, the component is transferred via the transfer line to the verification station where the mark is precisely aligned with a stationary ideal light source and a stationary camera/mark reader juxtaposed such that the camera field of view is aligned with the mark. After alignment, the reader reads the mark and attempts to verify code quality.
Verification can include several steps including decoding the mark and comparing the decoded information to known correct information associated with the mark that should have been applied. In addition, verification may also include detecting mark size, geometric mark characteristics (e.g., squareness of the mark), symbol contrast, quantity of applied ink, axial non-uniformity, grid non-uniformity, extreme reflectance, dot diameter, dot ovality, dot position, background uniformity, etc.
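In programmatic terms, such a verification pass amounts to decoding the mark and comparing a set of measured metrics against acceptance limits. By way of illustration only, the following sketch shows one way such a report could be structured; the metric names track the checks above, while the structure and the threshold values are assumptions rather than part of the described system.

```python
# Illustrative only: a hypothetical report structure for the verification
# checks listed above. The metric names track the text; the threshold values
# are assumptions and are not taken from the described system.
from dataclasses import dataclass

@dataclass
class VerificationReport:
    decoded_text: str
    expected_text: str
    symbol_contrast: float        # 0.0 (none) to 1.0 (ideal)
    axial_nonuniformity: float    # 0.0 is perfect
    grid_nonuniformity: float     # 0.0 is perfect
    dot_diameter_mm: float

    def passes(self) -> bool:
        """True only if the mark decodes correctly and each metric falls
        within an assumed acceptable range."""
        return (self.decoded_text == self.expected_text
                and self.symbol_contrast >= 0.40
                and self.axial_nonuniformity <= 0.10
                and self.grid_nonuniformity <= 0.25
                and 0.2 <= self.dot_diameter_mm <= 1.5)
```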
When a mark does not pass a verification process (i.e., mark quality is low), the marked component may be scrapped to ensure that the marked component does not enter distribution channels.
When a marked component passes a verification test at a manufacturing facility and is shipped to a client facility, it is often desirable for the client, upon receiving the component, to independently verify that mark quality is sufficient for use with all of the readers at the client's facility and to decode the mark information to verify component type, to establish a record of received components, to begin a warranty period, etc. To this end, on one hand, some known facilities include stationary verification systems akin to the verification stations at the component manufacturing facility described above that perform various verification processes including decoding to verify mark quality. In this regard, known verification systems, like the known verification station described above, include some stationary mechanism (e.g., mechanical locking devices, sensors, etc.) for precisely aligning the mark on the component with a stationary ideal light source and a stationary camera so that the camera can generate an image of the mark and a processor can then glean mark verifying information from the image.
On the other hand, other facilities employ hand held verifiers where a hand held mark reader is supplemented with verification hardware and software for performing verification processes.
Prior to using a verifier to verify mark quality, the verifier has to be calibrated so that, when operated, optimal images are obtained for verification purposes. Known verifier calibration processes require that a verifier unit be linked to a computer (e.g., a personal computer (PC)) where the computer is used to manually set calibration target parameters (e.g., reflectance measurements, dimension measurements, etc.). After the target parameters are manually set, a calibration target is placed in the verifier field of view (FOV) and the PC is used to initiate a calibration operation in which a sequence of images of the calibration target is obtained using different verifier settings (e.g., exposure times). For each image, the verifier measures various characteristics of the image and compares those characteristics to the calibration target parameters. The image sequence continues with different verifier settings until the image characteristics match or are substantially similar to the target parameters, after which the verifier settings or operating parameters are stored for use during subsequent verifier operations.
While verifiers can be calibrated using the known processes wherein the verifiers are linked to a computer (e.g., a PC), PC-based calibration processes are burdensome as the calibration software has to be installed and set up on the PC prior to performing the calibration process and a PC has to be present to facilitate the process.
It has been recognized that calibration parameters and a calibration start command can be stored on a calibration tool (e.g., a calibration card) and can be obtained from the calibration tool at any time by a verifier assembly to cause the verifier assembly to start a calibration process. Here, after obtaining information from a calibration tool and decoding the information, a verifier processor can be programmed to identify a calibration start command and calibration parameters automatically. Where a calibration start command is identified, the verifier can start the calibration process using the calibration parameters. In this way a verifier calibration process can be commenced without requiring additional hardware and in a particularly efficient manner.
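Conceptually, this reduces to a single decode-and-dispatch step: capture an image, decode it, and either commence calibration or perform ordinary verification depending on whether a calibration start command is present. By way of illustration only, the following sketch expresses that step; the helper names (capture_image, decode_mark, run_calibration, verify_mark) and the CAL_START token are assumptions made for the sketch and are not the actual verifier firmware interface.

```python
# Minimal sketch of the decode-and-dispatch idea described above. All names
# (capture_image, decode_mark, run_calibration, verify_mark, CAL_START) are
# assumptions made for illustration, not the actual verifier firmware API.

CAL_START = "CAL_START"  # assumed token identifying a calibration-type mark


def handle_trigger(capture_image, decode_mark, run_calibration, verify_mark):
    """Capture one image; calibrate if the decoded data carries a start
    command, otherwise treat the mark as an ordinary mark to be verified."""
    image = capture_image()
    decoded = decode_mark(image)            # e.g. text decoded from a DataMatrix symbol
    if decoded.startswith(CAL_START):
        params = decoded[len(CAL_START):]   # remainder carries the target parameters
        run_calibration(params)
    else:
        verify_mark(decoded)
```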
To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described. The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. However, these aspects are indicative of but a few of the various ways in which the principles of the invention can be employed. Other aspects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
Referring now to the drawings wherein like reference numerals correspond to similar elements throughout the several views and, more specifically, referring to
Hereinafter, the first and last columns and rows will collectively be referred to as the mark border or frame. Within the space framed by the mark border, mark 10 includes cells 14 for storing data. When the data corresponding to cells 14 is read and decoded, a portion of the decoded data is used to codify a calibration start command while a second portion of the decoded data is used to codify calibration target parameters to be used by a verifier assembly to set verifier target parameters and for facilitating a verifier calibration process. For instance, where the decoded information includes 10,000 bits of information, the first 100 bits of information may include the calibration start command when the mark 10 is a calibration type mark. The verifier assembly in the present case is programmed to recognize when the decoded information specifies a calibration process and to use the calibration parameters from the decoded information to facilitate a calibration process.
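As a concrete illustration of that layout, the sketch below splits a decoded bit string into a command portion and a parameter portion. The 100-bit command width follows the example in the preceding paragraph; the specific start pattern and the parameter field widths are assumptions made purely for illustration.

```python
# Sketch of splitting decoded calibration data into the command portion and
# the target-parameter portion, per the example above (first 100 bits hold the
# start command). The start pattern and field widths below are illustrative only.

CAL_COMMAND_BITS = 100
CAL_START_PATTERN = "1" * 4 + "0" * 96    # assumed bit pattern meaning "start calibration"


def parse_calibration_payload(bits: str):
    """Return (is_calibration, target_params) for a decoded bit string."""
    command, payload = bits[:CAL_COMMAND_BITS], bits[CAL_COMMAND_BITS:]
    if command != CAL_START_PATTERN:
        return False, None
    # Assumed layout: 8 bits of background reflectance (percent) followed by
    # 16 bits of code dimension in thousandths of an inch.
    reflectance_pct = int(payload[0:8], 2)
    code_dim_mils = int(payload[8:24], 2)
    return True, {"reflectance_pct": reflectance_pct, "code_dim_mils": code_dim_mils}
```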
Referring now to
Along a top surface of the barrel a plurality of LEDs 28 are mounted which can be illuminated to indicate the status of a calibration process and/or verification process. Speaker 26 is mounted in an external surface of housing 22 for generating sounds (e.g., one or a series of beeps or buzzes) for indicating process status. Trigger 24 is mounted along the grip portion of housing 22 to facilitate ergonomic activation thereof by pressing the trigger member 24 toward the grip portion of housing 22.
Each of processor 44, camera 40, memory 42 and lighting assemblies 25, 27 and 29 is mounted within housing 22. The camera may take several different forms but, in at least some embodiments, will include a charge-coupled device (CCD) camera or a CMOS camera. The camera is mounted within housing 22 so that a lens associated therewith is directed out the distal end 30 of the housing 22 for collecting images of objects/codes located within a field of view 32 (see
The three lighting assemblies 25, 27, and 29 are provided for generating different light patterns or light consistent with three different types of illumination schemes to illuminate objects/marks within the field of view 32 of the camera. For instance, one light assembly 25 may generate direct light field illumination while a second assembly 27 may generate dark field illumination.
Referring to
Referring still to
Referring once again to
Where the decoded data includes a calibration start command, processor 44 examines the decoded data to identify the calibration target parameters including, for example, background surface reflectance of the card 34, code dimensions of the mark 10 on the card 34, etc. Once the calibration target parameters have been identified, processor 44 uses the target parameters to set various calibration parameters to be used during the calibration process and then performs other calibration steps to identify values for additional operating parameters to be set.
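By way of illustration only, the following sketch shows one way the identified target parameters could seed the verifier's initial image-collection settings; the mapping from reflectance to an initial exposure guess is an assumption and is not the actual method used by processor 44.

```python
# Illustrative mapping from decoded calibration target parameters to initial
# image-collection settings. The formula below is an assumption, not the
# method actually used by the verifier processor.

def initial_settings_from_targets(targets: dict) -> dict:
    """Seed image-collection settings from decoded target parameters."""
    reflectance = targets.get("reflectance_pct", 80)     # brighter card -> shorter exposure
    exposure_us = int(2000 * (80 / max(reflectance, 1)))
    return {"exposure_us": exposure_us, "gain": 1.0}
```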
Referring now to
Continuing, at block 56, a verifier assembly user positions the calibration card adjacent the verifier assembly so that the calibration code or mark 10 is within the camera field of view. At block 58, processor 44 monitors for activation of the verifier trigger member 24. At decision block 60, where the trigger member 24 has not been activated, control passes back up to block 56 where the loop including blocks 56, 58 and 60 is repeated. Once trigger member 24 is activated at block 60, control passes to block 64 in
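Blocks 56, 58 and 60 amount to a simple polling loop that waits for trigger activation before image capture begins. A minimal sketch follows; the trigger_active callable is an assumed abstraction over the trigger hardware rather than part of the described assembly.

```python
import time

# Sketch of the wait-for-trigger loop in blocks 56-60. trigger_active() is an
# assumed hardware abstraction returning True while trigger member 24 is pressed.

def wait_for_trigger(trigger_active, poll_s: float = 0.01) -> None:
    """Block until the trigger is activated (blocks 56, 58 and 60)."""
    while not trigger_active():
        time.sleep(poll_s)   # keep waiting; the user may still be positioning the card
```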
Referring now to
Referring again to decision block 70 in
Referring now to
At block 124, processor 44 controls the first lighting assembly 25 and the camera 40 to obtain an image of the calibration mark in a manner consistent with the initial image collection settings. At block 126 after the image has been obtained, processor 44 identifies various characteristics of the obtained image and compares those characteristics to the calibration target parameters associated with the illumination scheme associated with the first lighting assembly. Where the characteristics of the obtained image are different than the compared target parameters, control passes to block 128 where the image collection settings are altered. For example, where an exposure time was initially set at a relatively short duration at block 122, at block 128, the exposure time may be extended by a small incremental duration. After block 128, control passes back up to block 124 where processor 44 obtains another image of the calibration mark using the camera 40 and the first lighting assembly 25. This looping process including blocks 124, 126 and 128 continues until the characteristics of the obtained image at block 126 are optimal. Once an optimal image has been obtained, control passes to block 130 where processor 44 stores the parameters associated with the first illumination scheme.
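Blocks 124 through 130 thus describe an iterative search: capture, measure, compare to the targets, adjust, and repeat. The following sketch expresses that search for a single illumination scheme under assumed helper names (acquire, measure, matches_targets); the fixed exposure increment is only one possible adjustment policy.

```python
# Sketch of the capture/measure/adjust loop in blocks 124-130 for a single
# illumination scheme. acquire(), measure() and matches_targets() are assumed
# helpers; the fixed exposure increment is an illustrative policy only.

def calibrate_scheme(acquire, measure, matches_targets,
                     initial_exposure_us: int = 500,
                     step_us: int = 100,
                     max_exposure_us: int = 20000) -> int:
    """Return the exposure (in microseconds) whose image matches the
    calibration target parameters, or raise if none is found."""
    exposure = initial_exposure_us
    while exposure <= max_exposure_us:
        image = acquire(exposure)                  # block 124: obtain an image
        characteristics = measure(image)           # block 126: measure the image
        if matches_targets(characteristics):       # block 126: compare to the targets
            return exposure                        # block 130: store these settings
        exposure += step_us                        # block 128: alter the settings
    raise RuntimeError("calibration failed: no exposure matched the targets")
```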
After block 130, control passes to block 132 where processor 44 controls the camera 40 and the second lighting assembly 27 to obtain another image of the calibration mark using initial image collection settings corresponding to the second illumination scheme. At block 134, processor 44 determines whether or not characteristics of the obtained image at block 132 are different than the calibration target parameters and, where the characteristics are different, control passes to block 136 where the image collection settings (e.g., the exposure time) are altered. After block 136, control passes back up to block 132 where processor 44 controls lighting assembly 27 and camera 40 to obtain another image using the altered image collection settings. After block 132, control again passes to block 134. Once an image is obtained at block 132 that has characteristics that are at least substantially similar to the target parameters, control passes from block 134 to block 138 where processor 44 stores the parameters associated with the second illumination scheme. After block 138, control passes down to block 140.
At block 140, processor 44 controls the third lighting assembly 29 and camera 40 to obtain an image corresponding to the third illumination scheme. At block 142, processor 44 identifies the characteristics of the obtained image and compares those characteristics to the target parameters. Where the characteristics of the obtained image are different than the target parameters, control passes from block 142 down to block 144 where the image collection settings (e.g., exposure time) are altered after which control passes back up to block 140. Eventually, characteristics of an image obtained at block 140 are substantially similar to the target parameters and at that point control passes from block 142 to block 146 where parameters associated with the third illumination scheme are stored. After block 146, control passes back up to block 58 in
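Blocks 124 through 146 therefore repeat the same search once per illumination scheme. By way of illustration only, the three passes can be expressed as a loop over the lighting assemblies; all helper names below, including the two-argument acquire(assembly, exposure) form, are assumptions made for the sketch.

```python
# Sketch of running the exposure search once per illumination scheme (blocks
# 124-146). acquire(assembly, exposure) is an assumed helper that captures an
# image with the given lighting assembly at the requested exposure.

def calibrate_all_schemes(assemblies, acquire, measure, matches_targets,
                          exposures_us=range(500, 20001, 100)):
    """For each lighting assembly, sweep exposures until the measured image
    characteristics match the calibration target parameters."""
    stored = {}
    for assembly in assemblies:                      # e.g. lighting assemblies 25, 27 and 29
        for exposure in exposures_us:
            if matches_targets(measure(acquire(assembly, exposure))):
                stored[assembly] = exposure          # blocks 130, 138 and 146: store
                break
        else:
            raise RuntimeError(f"no exposure matched the targets for {assembly}")
    return stored
```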
In at least some embodiments the process described above with respect to
While the embodiment described above with respect to
Referring once again to
Consistent with the comments above,
While the above systems are described in the context of verification assemblies that obtain data through decoding marks that appear in 2D images, in at least some other embodiments it is contemplated that calibration information may be obtained from a calibration card or tool in other ways. For example, in at least some cases it is contemplated that a calibration assembly may, in addition to being able to obtain images of 2D codes, be able to obtain information from a card 34b via RF communication. For instance, in at least some embodiments, a card may be provided with an RF memory and transmitter device which, when activated, transmits or broadcasts calibration information including at least a subset of a calibration start command and calibration target parameters. In this case, the verifier assembly would include an RF antenna for exciting the memory/transmitter device and for receiving transmitted information therefrom. Here, in order to complete the calibration process, a calibration card or target (with or without a mark) would be required in some cases so that reflectance and other parameters can be analyzed.
Consistent with the above comments, referring to
Here, when trigger member 24 is activated, the camera associated with assembly 20a is used to obtain an image of 2D mark 102. In addition, either prior to the camera obtaining the image or in parallel therewith, RF antenna 110 generates a field that excites device 104 causing device 104 to transmit RF data which is received by antenna 110 and which is provided to the assembly 20a (not illustrated in
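In this RF variant, the calibration information arrives over two channels: the start command and target parameters from the RF device, and the image of the 2D mark for reflectance and dimensional analysis. The sketch below illustrates merging the two; read_rf_payload, capture_image and run_calibration are assumed abstractions rather than components of the described assembly.

```python
# Sketch of the RF variant: the start command and target parameters arrive via
# RF while the image of 2D mark 102 supplies the measurable characteristics.
# read_rf_payload() and capture_image() are assumed abstractions.

def calibrate_via_rf(read_rf_payload, capture_image, run_calibration):
    """Combine RF-delivered calibration data with a captured image of the mark."""
    payload = read_rf_payload()                 # excite device 104 and read its data
    if not payload.get("calibration_start"):
        return None                             # payload did not request calibration
    image = capture_image()                     # image of 2D mark 102 for analysis
    return run_calibration(payload["target_params"], image)
```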
One or more specific embodiments of the present invention have been described above. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. For example, instead of providing a calibration start command in the code, in at least some embodiments a verifier may be programmed to decode information and recognize calibration target parameters and to commence a calibration process when the target parameters are recognized.
Furthermore, while the examples above describe systems where images are collected upon a trigger type activity, in at least some cases a hand held or stationary mounted verifier may continually collect images in a repetitive fashion. Here, each time an image is obtained the processor may be programmed to attempt to identify a calibration command in the decoded image information.
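In that continuous-capture variant, the decode-and-dispatch check simply runs on every obtained image rather than on a trigger pull. A minimal sketch follows, again using assumed helper names.

```python
# Sketch of the continuous-capture variant: every obtained image is decoded and
# checked for a calibration command. All helper names are assumptions.

def run_continuous(capture_image, decode_mark, is_calibration_command,
                   run_calibration, verify_mark, stop_requested):
    """Repetitively capture images; calibrate when a calibration command appears."""
    while not stop_requested():
        decoded = decode_mark(capture_image())
        if decoded is None:
            continue                              # nothing decodable in this frame
        if is_calibration_command(decoded):
            run_calibration(decoded)              # commence calibration automatically
        else:
            verify_mark(decoded)                  # ordinary verification pass
```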
Thus, the invention is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims.
To apprise the public of the scope of this invention, the following claims are made: