The present invention relates to machine vision and more specifically to portable machine vision systems for collecting images of items or groups of items, identifying the item(s) and performing one or more secondary functions using the collected images.
When assembly lines were first configured to increase the rate of manufacturing processes, the results of those processes often had to be measured by hand to ensure high quality products. For instance, where two parts had to be between 2 and 2⅛ inches apart, a line worker had to use some type of mechanical hand-held measuring device to manually measure the distance between the two parts and confirm that the distance was within the required range. Similarly, where a bottle was to be filled to a specific height with detergent prior to shipping, a line worker would have to manually examine the bottle to ensure that the detergent level was within a required range. While manual inspection and measurement can produce good results when performed correctly, in many cases such procedures were prone to error, were tedious for the persons performing them, were relatively expensive and time consuming to implement and could only be performed on a sampled basis (e.g., only on every 100th product that passed along the line).
More recently, camera or machine vision systems have been developed that eliminate many of the problems associated with manual inspection procedures. For instance, in one application a camera and a light source may be rigidly mounted adjacent a station along a manufacturing transfer line including clamps that, when a semi-finished product is moved to the station, clamp the product in a precise juxtaposition (i.e., at a precise distance from and along a precise trajectory to) with respect to the camera and light source. Here, the camera and light source may be positioned twelve inches from and normal to a space between first and second product components such that an image of the clamped product generated by the camera shows the first and second components separated by the space having a space dimension. In this case, the image may be provided to a processor for identifying the space dimension and for comparing the identified dimension to a required space dimension between the first and second components.
To determine the space dimension in the image, the processor is programmed to scale the space defined by the first and second components in the image appropriately assuming the precise juxtaposition of the camera to the components (i.e., assuming a normal camera trajectory and twelve inches between the camera and the space). Where the identified dimension is different than an expected and required dimension the processor may be programmed to reject the part, to store the difference, to indicate the difference, to suggest a correction to eliminate the difference, etc.
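For illustration, the scaling computation described above can be sketched as follows, assuming a simple pinhole camera model with a hypothetical focal length of 1200 pixels and the twelve-inch, normal-trajectory juxtaposition; a dimension measured in pixels is converted to inches and tested against a required range:

```python
# Minimal sketch: convert a measured pixel span to a physical dimension
# under a pinhole camera model (the focal length in pixels and the pixel
# span are hypothetical values).

def scale_dimension(span_px: float, distance_in: float, focal_px: float) -> float:
    """Pixel span -> inches, assuming the camera is normal to the
    measured feature at the given distance."""
    return span_px * distance_in / focal_px

space_in = scale_dimension(span_px=210.0, distance_in=12.0, focal_px=1200.0)
print(f"measured space: {space_in:.3f} in")        # 2.100 in

# Accept/reject against a required range, e.g. 2 to 2.125 inches.
print("within range" if 2.0 <= space_in <= 2.125 else "reject part")
```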
Other imaging applications in addition to dimension determination include color verification, defect detection, object/pattern verification, object recognition, assembly verification and archival processes for storing images and/or other correlated information (e.g., the results of color verification processes, defect detection processes, etc.). In each of the applications listed above, camera images are processed in a different fashion to facilitate different functions but the precise juxtaposition restrictions required to generate meaningful/useful images still exist.
Systems like the one described above work well in the context of mechanical components that ensure specific relative juxtapositions of a camera, a light source and a product or product feature being imaged. Hereinafter systems that rigidly mount cameras and light sources in specific positions with respect to other components that restrict product position during imaging will be referred to as “constrained vision systems”.
Unfortunately, there are a huge number of manufacturing and other applications where constrained vision systems are not easily employed but where the advantages of machine vision described above (i.e., accuracy, speed, low cost, consistency, etc.) would be useful. For instance, in the case of jet engine manufacturing, engines are typically assembled in relatively small numbers at assembly stations. Here, an engine often includes a large number of different components and the dimensions and spacings between many of the components must be precise in order for the engine to operate properly. Measurement of each of the required dimensions and spacings would require an excessive number of stationary cameras and light sources and would therefore be too costly for most applications. In addition, even if costs associated with required stationary cameras and sources were not an issue, placement of the cameras and sources adjacent the assembly station and at proper positions for obtaining required images would be impossible given the fact that assembly personnel need to move freely within the assembly station space during the assembly process.
Moreover, in many applications the product (e.g., a jet engine) being assembled may be mounted for rotation about one or more axes so that assembly personnel can manipulate the product easily to obtain different optimal vantage points for installing components. Here, where product position is alterable, even if a stationary camera were provided adjacent an assembly station, precise positioning of the camera with respect to the product would be difficult to achieve as the product is moved to optimize position for installation purposes. Where camera position with respect to the product/feature being imaged is unknown or is inaccurately assumed, resulting images can be mis-scaled such that comparison to required ranges is inappropriate. For instance, assume that a required distance between two features must be within a range of 5 to 5⅛ inches and a machine vision system assumes that a camera is twelve inches from and normal to a space between the two features when an image is obtained. In this case, when an image is obtained, the system may be programmed to identify the two features and measure the dimension therebetween. Thereafter, the system scales the measured dimension assuming the image was taken from twelve inches and at an angle normal to the space between the two features. In this case, if the camera used to obtain the image is at an angle (e.g., 15 degrees) with respect to normal and/or is fifteen inches from the space as opposed to twelve inches, the dimension calculated by the system will be different from the actual dimension and an error will likely be indicated even if the actual dimension is within the required range.
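The magnitude of such an error follows from the same pinhole geometry; the following sketch uses the distance and angle errors above with a hypothetical true dimension to show how a part within tolerance can nevertheless be flagged:

```python
import math

# Sketch of the mis-scaling error described above. The true dimension is
# hypothetical; the distance and angle mismatches are those from the text.
actual_dim = 5.0625                      # inside the 5 to 5.125 in range
assumed_dist, actual_dist = 12.0, 15.0   # inches
off_normal_deg = 15.0

# Foreshortening (cosine term) and the distance mismatch both corrupt
# the computed value when the system assumes 12 inches and normal view.
computed = actual_dim * math.cos(math.radians(off_normal_deg)) * (
    assumed_dist / actual_dist)
print(f"computed: {computed:.3f} in")    # ~3.912 in
print("in range" if 5.0 <= computed <= 5.125 else "error indicated")
```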
Another exemplary application where constrained vision systems are not easily employed but where the advantages of machine vision described above would be useful is in small manufacturing plants. For instance, in a small metal working facility there may be 1000 different measurements that routinely need to be taken on various products. Here, while at least some of the 1000 measurements may need to be made hundreds of times each year, the volume of product associated with even the most routine measurements may not justify the costs associated with even a single stationary machine vision system. In these cases the advantages of machine vision are typically foregone and all measurements end up being manual.
Thus, it would be advantageous to have a system and methods whereby the advantages of machine vision systems could be employed in many applications in which such advantages have heretofore been foregone for various reasons and where costs associated therewith can be reduced appreciably. More specifically, it would be advantageous to have a system and methods whereby one camera and light assembly could be used to obtain many different product/feature images of one or a plurality of products and where the system could automatically identify product types, image types and specific features associated with image and product types and could then perform product and image specific applications or supplemental processes. Furthermore, it would be advantageous to have a system and methods that provide guidance to a system operator for generally optimally positioning a camera/light source with respect to products and features for obtaining suitable images for processing.
It has been recognized that many of the advantages associated with machine vision in the context of constrained vision systems above can be extended to other applications by providing a camera and light source on a hand held portable device (hereinafter an “HHD”). Here, the HHD can be positioned in any relative juxtaposition with respect to a product or product feature to obtain images thereof for analysis by system software. Where many different product characteristics need to be imaged for examination purposes, the camera can be positioned in several different positions to obtain the images. In at least some cases system software can be programmed to recognize an obtained image as similar to a specific one of a plurality of optimal stored images that includes specific features of interest so that, when the image is obtained, the system can automatically perform different processes on the image information.
In at least some cases the HHD may include a guidance mechanism for helping a user position the HHD with respect to a product or feature for obtaining an image suitable for specific processes. In some cases guidance may be provided by simply transmitting a specific light pattern toward a product to show the field of camera view. In other cases the guidance may be “active” and include indications that direct an HHD user to move the HHD in specific directions relative to the product or feature being imaged. Where active guidance is provided, the HHD may have access to optimal product images stored in a database. Here, the HHD may obtain an image, compare the image to one or more of the optimal images to identify positional changes that would result in a more optimal positioning of the HHD and then provide guidance to move left, move right, move up, move down, move forward, move backward, adjust pitch, adjust roll, etc.
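One minimal way to derive such direction indications is sketched below: coarse foreground statistics (centroid and coverage) of the obtained image are compared against those of a stored optimal image. The foreground test and tolerance are hypothetical stand-ins for the image comparison the system would actually perform:

```python
import numpy as np

# Guidance sketch: compare coarse foreground statistics of an obtained
# image against those of a stored optimal image and emit movement hints.

def stats(gray: np.ndarray):
    fg = gray > gray.mean()                 # crude foreground mask
    ys, xs = np.nonzero(fg)
    if xs.size == 0:
        return 0.5, 0.5, 0.0                # nothing detected
    h, w = gray.shape
    return xs.mean() / w, ys.mean() / h, fg.mean()   # cx, cy, coverage

def guidance(obtained: np.ndarray, optimal: np.ndarray, tol: float = 0.05):
    (cx, cy, cov), (ox, oy, ocov) = stats(obtained), stats(optimal)
    hints = []
    if cx - ox > tol: hints.append("move right")   # feature right of target
    if ox - cx > tol: hints.append("move left")
    if cy - oy > tol: hints.append("move down")    # feature low in image
    if oy - cy > tol: hints.append("move up")
    if cov - ocov > tol: hints.append("move backward")  # too close
    if ocov - cov > tol: hints.append("move forward")   # too far
    return hints or ["aligned"]
```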
Where the HHD is to be used to obtain images of two or more different product types, an identification tag (e.g., bar code, RFID tag, etc.) may be placed on each of the products with information useable to identify the product and the HHD may also be equipped with a tag reader for obtaining information from the tags. In this case, prior to or after obtaining an image of a product, the HHD may be used to obtain information from the tag so that the product type can be determined. The HHD may be programmed to perform different functions for each different product type. For instance, for a first product type the HHD may be programmed to guide the HHD user to obtain two different images that are similar to optimal stored images and to perform various measurements of features in the images while, for a second product type, the HHD may be programmed to guide the HHD user to obtain five different images from various orientations and to perform various measurements of features in the five images. After the tag information is obtained the HHD can perform product type specific processes and provide product type specific instructions.
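This product-type-specific behavior can be sketched as a simple dispatch keyed on tag data; the identifiers and per-type process lists below are hypothetical:

```python
# Sketch of product-type-specific dispatch keyed on tag data.

PROCESSES_BY_TYPE = {
    "type-1": ["guide user to 2 optimal images", "measure imaged features"],
    "type-2": ["guide user to 5 orientations", "measure features per image"],
}

def on_tag_read(product_type: str) -> list:
    # Select the supplemental processes and instructions for this type.
    return PROCESSES_BY_TYPE.get(product_type, ["unrecognized product type"])
```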
Consistent with the above comments, at least some embodiments of the present invention include a method for use with a portable device including a processor, the method comprising the steps of providing an identification tag on at least one item that includes item identifying information, obtaining item identifying information from the identification tag via the portable device, obtaining at least one image of the at least one item via the portable device and performing a supplemental process using the identifying information and the at least one image.
In addition, some embodiments of the invention include a method for use with a database and a portable device including a processor, the method comprising the steps of (a) storing information in the database regarding at least one optimal relative juxtaposition of the portable device with respect to a first item type to be imaged, (b) positioning the portable device proximate an instance of the first item, (c) obtaining at least one intermediate image of the first item instance; and (d) providing guidance information indicating relative movement of the portable device with respect to the first item instance to move the portable device toward the at least one optimal relative juxtaposition with respect to the first item instance.
Here, in some cases the method may further include the step of, prior to step (d), examining the intermediate image to identify position of the portable device with respect to the first item instance, the step of providing guidance including providing guidance as a function of the intermediate image examination. Steps (b) through (d) may be repeated until the portable device is at least substantially in the at least one optimal relative juxtaposition with respect to the first item instance.
At least some embodiments of the invention include a method for use with a database and a portable device including a processor, the method for obtaining information associated with a subset of a plurality of different item types, the method comprising the steps of providing an identification tag on at least one item that includes item identifying information, obtaining item identifying information from the identification tag via the portable device, identifying the item type from the item identifying information, obtaining at least one image of the at least one item via the portable device, identifying at least one supplemental process to be performed on the obtained image wherein the supplemental process is identified at least in part as a function of the item type and performing the at least one supplemental process on the obtained image.
In addition, according to some inventive aspects, some embodiments include a method for use with a database and a portable device including a processor, the method for performing functions associated with different items, the method comprising the steps of providing identification tags on each of a plurality of items, each tag including identifying information useable to specifically identify an associated item, obtaining item identifying information from at least one identification tag corresponding to a specific item via the portable device, obtaining other information associated with the specific item including at least one image of the specific item via the portable device, performing supplemental processes on the other information associated with the specific item, monitoring the portable device for a transition indication and, when a transition indication is received, halting the supplemental processes.
Some embodiments include a system for use with items that include identification tags where each tag includes item identifying information, the system comprising a portable housing, a processor mounted within the housing, a tag information obtainer supported by the housing for obtaining information from the tags and a camera supported by the housing, wherein the processor runs a program to, when information is obtained from a tag via the obtainer and an image is obtained via the camera, perform a supplemental process on the image as a function of the information obtained from the tag.
Moreover, some embodiments of the invention include an apparatus for use with items that include identification tags where each tag includes item identifying information, the apparatus comprising a portable housing, a processor mounted within the housing, a tag information obtainer supported by the housing for obtaining information from the tags and a camera supported by the housing, wherein the processor runs a program to, when information is obtained from a tag associated with a specific item via the obtainer, perform supplemental processes associated with the specific item on images obtained by the camera until a transition indication is received and, when a transition indication is received, halt the supplemental processes associated with the specific item.
Furthermore, some embodiments include a method for use with a portable device including a processor, the method comprising the steps of identifying at least one item using the portable device, obtaining at least one image of the at least one item via the portable device and performing a supplemental process on the at least one image as a function of the identity of the item.
According to one aspect the invention includes a method for marking a product for use with a camera including a field of view, the method comprising the steps of providing an identification tag on the product wherein the tag is machine readable to obtain information associated with the product and providing a source mark on the product spatially proximate the tag such that both the tag and the source mark can be simultaneously located within the camera field of view.
In addition, some embodiments include a method for use with a portable device including a processor, the method comprising the steps of associating an identification tag with at least one item that includes item identifying information, obtaining item identifying information from the identification tag via the portable device, obtaining at least one image of the at least one item via the portable device and performing a supplemental process using the identifying information and the at least one image. Here, the step of associating may include providing the identification tag on the item or providing the tag in a booklet along with human distinguishable information associated with the item.
These and other objects, advantages and aspects of the invention will become apparent from the following description. In the description, reference is made to the accompanying drawings which form a part hereof, and in which there is shown a preferred embodiment of the invention. Such embodiment does not necessarily represent the full scope of the invention and reference is made therefore, to the claims herein for interpreting the scope of the invention.
One or more specific embodiments of the present invention will be described below. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
Referring now to the drawings wherein like reference numerals correspond to similar elements throughout the several views and, more specifically, referring to
In addition to storing programs 15 run by server 13, database 14 also stores information in a product/function database 17 corresponding to items or products for which some process or method may be performed by system 10. In addition, in at least some embodiments of the present invention, database 14 also stores results or at least a subset of the results in a results database 19 that occur after programs 15 have been performed.
In at least some embodiments of the present invention, interface 12 includes a display screen 22 and some type of input device such as keyboard 24. Other types of input devices are contemplated including a mouse, a track ball, voice recognition hardware, etc. Using work station 12, a system operator can perform various tasks, depending upon the method being performed by server 13, to facilitate method steps to be described hereinafter.
Access point 16, as is well known in the art, is a wireless transceiver that is capable of receiving wireless signals (e.g., 802.11b, Bluetooth, etc.) and transmitting similarly coded wireless information within the area surrounding access point 16. When access point 16 receives information, the information is decoded and transmitted to server 13 via network 18. Similarly, server 13 can transmit information wirelessly within the space surrounding access point 16 by transmitting the information first via network 18 to access point 16 and causing access point 16 to transmit the information within the surrounding area.
Referring still to
Referring still to
Referring still to
Referring to
Referring now to
At block 111, HHD 20 is provided that includes a camera/tag reader as described above. At block 112, identification tags are provided on items including blade 200 in
According to other embodiments of the present invention, instead of requiring a system user to separately obtain an item image and identification information from a tag 202 or the like, a single image may be obtained which includes features of an item that are of interest as well as the tag information. For instance, referring once again to
Consistent with the comments in the preceding paragraph, referring now to
In addition to storing correlated images and item/product identifying information, other supplemental functions are contemplated that may be performed via HHD 20. For instance, in at least some cases, it is contemplated that HHD 20 may perform programs to analyze obtained images to identify regions of interest in the images, visually distinguishable product features within regions of interest and characteristics of those features. For example, referring again to
As another example, where optimal product dimensions have been specified in a database (e.g., in memory 25), HHD processor 21 may be programmed to, after item/product dimensions have been calculated, compare the calculated values to the optimal values and store results of the comparison in a correlated fashion with item identifying information in HHD memory 25.
Referring still to
Similarly, in at least some cases, HHD 20 may simply obtain tag identifying information and product images, correlate and store the identifying information and images and download the correlated information to server 13 in batch either wirelessly or via a hard wire connection (e.g., a USB port), additional supplemental processes being performed thereafter via server 13.
Where work station 12 or at least display 22 is proximate the location of blade 200 during the imaging process and server 13 performs at least parts of the supplemental processes, server 13 may provide at least a subset of the process results to an HHD user via display 22. For instance, product images may be provided via display 22 as well as product identifying information (e.g., “rotor blade type 00-0001”). As another instance, where product features and characteristics are calculated from an image, the calculated values may be provided via display 22 either in a list form or, where an obtained image is generated, as markings on the generated image. As still one other instance, where calculated values are different than expected or optimal values, the differences may be indicated via display 22 in some fashion. Moreover, where calculated values are within an expected or acceptable range, an affirmation of the values as acceptable may be provided via display 22. The above feedback functions may also be performed/facilitated via display 22 where HHD processor 21 performs most of the supplemental functions and then transmits results to the display via point 16 and network 18.
Referring now to
Referring still to
Here it is contemplated that the instructions for identifying features would be provided or specified during some type of commissioning procedure. For instance, in at least some cases an HHD 20 may be used to obtain an image of a product where the image includes the features (e.g., edges, curvatures, etc.) of interest. Next, the obtained image may be presented via display 22 and interface tools (e.g., a mouse, trackball, etc.) may be used to select image features such as edges. Here, software run by server 13 may help the user charged with commissioning to distinguish features. For instance, where a mouse controlled cursor is moved to edge F7 (see again
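One such aid can be sketched as follows: the operator's click is snapped to the strongest nearby gradient so that the selected edge (e.g., edge F7) is recorded precisely. The search window and gradient test are assumptions:

```python
import numpy as np

# Commissioning sketch: snap an operator's mouse click to the strongest
# nearby gradient so the selected edge is recorded precisely.

def snap_to_edge(gray: np.ndarray, click_xy, window: int = 15):
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                       # edge strength
    x, y = click_xy
    y0, x0 = max(0, y - window), max(0, x - window)
    patch = mag[y0:y + window + 1, x0:x + window + 1]
    dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
    return x0 + dx, y0 + dy                      # snapped pixel position
```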
Referring again to
Here again it is assumed that the feature functions are specified during a commissioning procedure. For instance, after features of interest have been identified and rules for identifying the features in obtained images have been developed and stored, server 13 may provide an interface for grouping image features into subsets (e.g., 82, 90, 91, etc.) and for selecting associated feature functions. For example, referring again to
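The commissioned feature subsets and their associated feature functions might be recorded as in the following sketch, in which all feature names, function names and ranges are hypothetical:

```python
# Sketch of commissioned feature subsets and their feature functions.

FEATURE_FUNCTIONS = [
    {"features": ("edge_F7", "edge_F9"),
     "function": "distance_between",
     "expected_range_in": (5.0, 5.125)},      # acceptable range, inches
    {"features": ("region_82",),
     "function": "color_verification",
     "expected": "matte gray"},
]
```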
Hereinafter, unless indicated otherwise, it will be assumed that HHD processor 21 performs most of the supplemental processes. Nevertheless, it should be appreciated that, in at least some cases, some or most of the process steps could also be performed by server 13 in communication with HHD 20 via access point 16.
Referring now to
Continuing, at block 154, processor 21 compares image characteristics to required characteristics to identify any differences. At block 156, processor 21 provides feedback regarding feature characteristics. Here, in at least some embodiments, it is contemplated that to provide feedback, processor 21 transmits information via transceiver 34 to access point 16 and on to server 13 which provides feedback via display screen 22. In at least some cases, after block 156, either processor 21 or server 13 may be programmed to correlate and store item identification information, features and characteristic information including any differences between measured and required characteristics in the results database 19 (see again
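The compare step at blocks 154 and 156 can be sketched as follows, assuming measured characteristics and required ranges are keyed by feature name; the record layout is hypothetical:

```python
# Sketch of the compare-and-report step at blocks 154 and 156.

def compare(measured: dict, required: dict) -> dict:
    """measured: {feature: value}; required: {feature: (low, high)}."""
    results = {}
    for name, (low, high) in required.items():
        value = measured[name]
        in_range = low <= value <= high
        # Signed distance to the nearer range bound when out of range.
        delta = 0.0 if in_range else min(value - low, value - high, key=abs)
        results[name] = {"value": value, "ok": in_range, "delta": delta}
    return results

report = compare({"edge_spacing_in": 5.21}, {"edge_spacing_in": (5.0, 5.125)})
print(report)   # edge_spacing_in out of range by +0.085 in
```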
While the systems described above have several advantages, such systems also may have some shortcomings in the context of certain applications. For example, in general, the systems described above rely on the assumption that an HHD user will be extremely familiar with the products being imaged, the supplemental functions or processes being performed via the system and advantageous relative juxtapositions (e.g., angle, spacing, pitch of HHD, etc.) between the HHD and the products/features being imaged from which to obtain the images. Indeed, to obtain an image including features of interest for performing the supplemental functions, the HHD user would have to know which features are of interest and generally optimal angles and distances of the HHD to the product being imaged to ensure that suitable images are obtained for accurately identifying the features and feature characteristics. Here, while filtering and software compensation schemes may be used to compensate for minor differences between angles and distances of the camera to the features of interest, larger differences may not be accurately discernable and hence may not be able to be accurately compensated. Where an HHD user is not familiar with the products, advantageous imaging juxtapositions or product features of interest, however, the systems described above would be difficult to use at best.
To deal with the alignment problem described above, in at least some cases it is contemplated that system 10 may be programmed to help an HHD user position HHD 20 such that an image obtained thereby is optimal for performing other functions such as dimension measurement, color identification, and so on. To this end, referring to
Here, it is assumed that an optimal image of a product has been captured during a commissioning procedure and stored in database 17 for each of the image identifiers listed in column 76. The idea here is that, when HHD 20 is used to obtain an image of a product, the obtained image can be compared to one or more optimal images to identify HHD movements required to move the HHD into an optimal juxtaposition with respect to the product being imaged or at least to identify if the HHD is within an acceptable range of optimal relative juxtapositions. In this case, an HHD position within an optimal range of relative juxtapositions will include a position wherein the characteristics that can be calculated from an image obtained from the position will be sufficiently accurate for the application associated with the characteristics. Here, sufficiency of accuracy is a matter of designer choice.
Feedback can be provided either via the HHD or some external device (e.g., display 22, see again
Referring again to
Referring now to
Referring now to
Continuing, at block 172, HHD 20 is used to obtain an intermediate image of blade 200. At block 174, processor 21 compares the obtained and optimal images to identify the optimal image that is most similar to the obtained image. In the present case, because there is only one optimal image in column 76 associated with the rotor blade identification number 00-0001, the single optimal image associated with identifier 73 is identified at block 174. In other cases where two or more optimal image identifiers are listed in column 76 (e.g., in the case of blades having ID number 00-0002), processor 21 selects one of the optimal images for performing the guidance process at block 174.
After block 174, at block 176, processor 21 determines whether or not the obtained image is substantially similar to the most similar optimal image (e.g., the obtained image is suitable for sufficiently accurately performing the feature functions associated therewith). Where the obtained image is not substantially similar to the optimal image, control passes to block 178 where processor 21 provides guidance via the feedback devices (e.g., the LEDs 36, 38 and 40 in
Referring once again to block 176, once the obtained image is substantially similar to one of the optimal images, control passes from block 176 to block 179 where the most recent intermediate image is used as a final image. In addition, feedback indicating alignment may be provided via LEDs 36, 38 and 40 (see again
Here, once again, in at least some embodiments, feedback will be provided by transmitting feedback information to access point 16 and on to server 13 where the server provides the feedback information via display 22. In other embodiments feedback may be provided via an audio speaker 39 (see again
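The loop of blocks 172 through 179 can be sketched as follows, where obtain_image and indicate stand in for the camera and feedback devices and normalized correlation over same-size grayscale images is an assumed similarity measure; the 0.9 threshold is hypothetical:

```python
import numpy as np

# Loop sketch for blocks 172 through 179.

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())          # normalized correlation

def align(obtain_image, indicate, optimal_images, threshold: float = 0.9):
    while True:
        img = obtain_image()                                        # block 172
        best = max(optimal_images, key=lambda o: similarity(img, o))  # block 174
        if similarity(img, best) >= threshold:                      # block 176
            indicate("aligned")
            return img                             # final image, block 179
        indicate("adjust position")                # guidance, block 178
```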
Referring now to
Similarly, the orientation arrows indicate how the barrel of the HHD 20 should be moved to adjust pitch and roll. For instance, when arrow 66 is illuminated, the HHD barrel 29 should be moved forward with respect to the HHD handle 27 thereby effecting a forward pitch movement. Similarly, where arrow 68 is illuminated, the barrel 29 should be rotated to the right with respect to handle 27 effecting a roll of the HHD.
In at least some other embodiments it is contemplated that a feedback arrangement may include a small flat panel display screen mounted directly on HHD 20. To this end, an exemplary HHD display screen 32c is best illustrated in
Where an HHD 20 is equipped with its own feedback display arrangement 32c, relatively detailed instructions can be provided to an HHD user for obtaining optimal images. To this end, referring once again to
In the illustrated example, the instructions are provided in a step-wise fashion to first direct the HHD user with respect to pitch and roll, left and right and up and down movement and then to instruct the user with respect to forward and backward movement. Thus, in
Referring to
Referring once again to
In at least some cases it is contemplated that where multiple optimal images are associated with a single item identification number and an obtained image is similar to two or more of the optimal images, processor 21 may provide alternate instructions and allow the HHD user to obtain any one of the optimal images through different movements of the HHD 20. For the purposes of the next example, referring again to
In this case, processor 21 may be programmed to provide alternate instructions 248 guiding the user to either align the HHD to obtain the optimal edge view image or to obtain the optimal plan view image. Referring to
In addition to the supplemental functions described above, one additional and particularly advantageous supplemental function that may be performed by system 10 is a part verification function. To this end, in many cases identification tags are provided on products or items so that product consumers can verify that the items are genuine parts from specific manufacturers that are known to provide high quality products. Thus, for instance, to ensure that a rotor blade was manufactured by Harbinger Aero Parts, a reputable parts manufacturer, Harbinger Aero Parts may provide identification tags on each one of their rotor blades that can be used by an end user to attempt to verify authenticity. Unfortunately, part counterfeiters have begun to copy the information on identification tags and place similar tags on counterfeit parts so that end users can no longer verify part authenticity by tag information alone.
To deal with the above problem, it has been recognized that a source mark in the form of a trademark or the like can be provided either as part of an identification tag or proximate an identification tag such that when an image of the identification tag is obtained, an image of the source mark is also obtained. In this case, in addition to verifying an item identification number in an attempt to verify authenticity, the source mark can be compared to a trademark of the trusted supplier and, if the source mark is different than the trusted supplier's trademark, the part can be recognized as a counterfeit part. In addition, if the source mark is not present in the obtained image, the part can be recognized as a counterfeit part. While part counterfeiters could copy source marks as well as identification tags and use those marks and tags together in an attempt to continue to counterfeit products, copying a source mark like a trademark would be a separate trademark violation and would be relatively easily actionable.
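Such a verification check can be sketched as follows, assuming the source mark region has been located within the obtained image and a reference image of the trusted supplier's mark is stored; the correlation test and threshold are illustrative:

```python
import numpy as np

# Part-verification sketch: check that a source mark accompanies the ID
# tag and resembles the trusted supplier's stored mark.

def norm(a: np.ndarray) -> np.ndarray:
    a = a.astype(float)
    return (a - a.mean()) / (a.std() + 1e-9)

def verify_mark(mark_region: np.ndarray, trusted_mark: np.ndarray,
                threshold: float = 0.8) -> bool:
    if mark_region.shape != trusted_mark.shape:
        return False            # no mark found at the expected location
    score = float((norm(mark_region) * norm(trusted_mark)).mean())
    return score >= threshold   # below threshold: treat as counterfeit
```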
Consistent with the comments above, referring to
In at least some cases it is contemplated that, after an HHD is used to obtain product identifying information from a tag that is placed on or associated with a particular product, the HHD will continue to be associated with the particular product for the purpose of performing supplemental functions until a transition indication or event occurs such as either new tag information being obtained from a different tag or the HHD user performing some process to indicate that the association between the HHD and the product should be broken. Thus, for instance, after identification information is obtained from a tag, HHD 20 may be used to obtain ten different optimal images of the product associated with the identification tag, the HHD processor 21 performing a different subset of supplemental functions for each one of the obtained images without having to reassociate with the product each time a new image is obtained. At any time, if the HHD is used to obtain information from a different ID tag, HHD 20 association with the previous product is broken and a new HHD product association is formed with the new product.
Consistent with the comments in the previous paragraph, referring now to
While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. For example, other supplemental processes are contemplated. For instance, kit component verification is contemplated. To this end, referring to
In addition, systems are contemplated wherein a portable HHD may be associated with an item in ways other than obtaining tag information from a tag that resides on the item. For instance, in at least some cases a tag may reside in a parts booklet or the like where the tag is spatially associated (e.g., on the same page in the booklet) with an image of the item/product. For instance, a tag for a rotor blade as described above may be presented next to an optimal image. Here, an HHD user could obtain tag information from the booklet to associate the HHD temporarily with an item and then perform image obtaining processes and other supplemental processes using the HHD as described above. Similarly, an end process or disassociation tag may be included in a booklet or elsewhere in proximity to where an HHD is used to disassociate the HHD from an item with which it is currently associated.
Moreover, while some systems are described above as including guidance capabilities, in at least some cases it is contemplated that no guidance functions may be provided. Similarly, while some systems are described above that include association via a tag, in some cases such association may not be supported. For instance, where an HHD is used with a single item type, the HHD may be preprogrammed for use with the single item type and the supplemental processes may all be the same regardless of the instance of the item that is imaged. Here there is still value in the inventive concepts as different processes may be performed depending on which image is obtained and depending on the quality of the images obtained.
Furthermore, various types of commissioning procedures are contemplated wherein items having known standard characteristics are imaged to generate at least one optimal image for each item and then features on the items are identified as well as feature characteristics of interest and acceptable ranges of characteristic values. The present invention may be used with any type of commissioning procedure that generates suitable database information.
In addition, while the inventive aspects have been described above in the context of an HHD including a camera/sensor capable of obtaining both ID tag information as well as images of products/items, it should be appreciated that other HHD configurations are contemplated where the camera and the tag reader are separate HHD components. Here, note that the tag reader may take several forms including a bar code reader, an optical character recognition reader, an RF sensor, etc.
Moreover, instead of storing optimal images to facilitate guidance, other types of information that reflect optimal images may be stored. For instance, general orientation of edges of a product may be stored along with ranges of dimensions for comparison to similar features and dimensions in obtained images.
In still other embodiments it is contemplated that features required to perform a vision process may not be able to be captured in a single image. For instance, where a container (e.g., box) or part is relatively large, it may be difficult or essentially impossible to obtain a single image of the container or part that includes the features that define a required dimension (e.g., a box width), or a vision process may require information that can only be obtained from images of multiple sides of the box that cannot be imaged in a single image. As another instance, a standard form may include relatively fine print or small characters so that resolution of form information in a single image is insufficient for accurate reading. As still one other instance, where a VIN number on an automobile dashboard has to be read through the windshield, it may be that the complete number cannot be read easily via an image obtained from one orientation because of glare off the windshield.
In these examples and many more, according to one aspect of at least some embodiments of the invention, multiple images can be obtained of an item or object to be imaged using a hand held device and information from the multiple images can be used together to complete a machine vision process (e.g., identifying a complete VIN number, obtaining all required information from a form, obtaining spatially associated images of required box features, etc.).
Referring now to
In addition, processor 21 may be programmed to search for common box features among images so that relative juxtapositions of features of interest can be surmised. For instance, referring again to
Here, the processor 21 may be programmed to recognize that an image associated with FOV 42b can be used to fill in the gap between images associated with FOVs 42a and 42c. Thus, for instance, once images including edges 440 and 442 are obtained and processor 21 recognizes that there are no common features in the two images, processor 21 may be programmed to identify other features in the two obtained images and for other images (e.g., the image associated with FOV 42b) that include the other features until a complete chain of image features linking the two edges together can be identified. Thereafter, the dimension between the two edges 440 and 442 can be determined by identifying dimensions between the chain of features and adding up the dimensions between the features. For instance, in
In some embodiments HHD 20 may be able to measure the distance between the HHD and the surface 432 of the box being imaged and may therefore be able to determine the instantaneous size of the camera FOV and to scale image features as a function of the distance. In other cases the processor 21 may be programmed to recognize one or more features of known dimensions in one or more images and to use that information to determine other dimensions within obtained images. For instance, in
The dimensions of other features in an image can be determined in a similar fashion. For instance, where dimension L4 is known to be one inch and the dimension L1 in
In the above example where the processor 21 is to determine the box length dimension, the processor 21 may be programmed to determine the dimension in the fastest accurate way possible. For instance, where one image of the box will suffice because both edges 440 and 442 are in the image, processor 21 may use the single image. Where one image does not include the two edges, processor 21 may be programmed to identify two images where each of the two images includes one of the edges 440 and 442 and each of the two images includes at least one common feature that can be used to chain or link the two edges spatially together. Where no two images include required features, processor 21 may start to attempt to chain different distinguishing features in the image together until the dimension is determined.
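The chaining-and-scaling computation can be sketched as follows, assuming the pixel span between each pair of successive common features has been measured and a fiducial of known size (e.g., a tag edge known to be one inch long) appears in the images; all numbers are hypothetical:

```python
# Sketch: chain feature-to-feature pixel spans measured in overlapping
# images and scale them with a fiducial of known size. For simplicity a
# single scale is assumed across images; in practice each image could
# carry its own fiducial-derived scale.

def chain_dimension(spans_px, fiducial_px: float, fiducial_in: float = 1.0):
    """spans_px: pixel distances between successive common features,
    e.g. edge 440 -> feature A (image 1), A -> feature B (image 2),
    B -> edge 442 (image 3)."""
    inches_per_px = fiducial_in / fiducial_px
    return sum(spans_px) * inches_per_px

length = chain_dimension([310.0, 288.0, 301.0], fiducial_px=96.0)
print(f"box length: {length:.2f} in")   # 9.36 in
```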
In the above example, HHD 20 may provide feedback to a user indicating when additional images need to be obtained and to indicate when a required vision process has been completed. For instance, in some cases, referring again to
As another example, the feedback assembly 32, 39 may include a display screen 469 via which various feedback information is provided. For instance, where the dimension between box edges is required, the HHD may simply textually indicate that a box length dimension is required and that the user should obtain images including left and right box edges. Here, when an image including one edge is obtained the HHD may textually indicate that one edge has been imaged and that an image of the opposite edge should be obtained thereby guiding the user to obtain another required image.
In at least some cases, after at least one required image feature is identified and when additional features need to be captured in images, processor 21 may be programmed to anticipate which way the HHD should be moved to obtain additional images and may provide feedback or guidance to a user via the display. For instance, where one edge is identified and a second edge needs to be imaged to complete a dimension measurement process, processor 21 may be able to examine image data and recognize that the edge already imaged is a left edge and that the additional edge will be the right edge. In this case the HHD may illuminate an arrow (see again
Referring again to
In some embodiments, where features of an item to be imaged are generally known prior to capturing images of the item, the general known features may be used to generate a “model” of the item being imaged and to present that model via the display 469 with features of the model to be imaged visually distinguished in some fashion. For instance, referring to
Referring again to
Gyroscope 481 enables processor 21 to determine whether or not HHD 20 has been reoriented during an image capturing process and to properly associate image features with known item features. Thus, in the above example where the dimension between edges 440 and 442 is required, where HHD 20 obtains first and subsequent images of edges 440 and 443, processor 21 may be programmed to determine which box edges were likely imaged and would not halt the image collecting process until a different image that is likely to include edge 442 is obtained. Moreover, where processor 21 is unable to determine which edges have been imaged, processor 21 may stall ultimate processing (e.g., dimension calculation) until more reliable images and feature information are obtained. Thus, for instance, where HHD 20 obtains an image including edge 440 in a first image and obtains a subsequent image including edge 442 where the HHD has been reoriented from pointing the FOV generally vertically to pointing somewhat horizontally when the first and subsequent images are obtained, processor 21 may be unable to distinguish which edge, 442 or 443, was imaged given the images and the reorientation of HHD 20. Here, processor 21 would simply instruct the HHD user to continue to obtain images until more reliable images are obtained.
In addition to a gyroscope, at least some HHD embodiments may include an accelerometer 491 that can be linked to processor 21 to provide information regarding movement of the HHD within a space left, right, forward, backward and up and down. In some cases the gyroscope 481 and the accelerometer 491 information may be used to better or more accurately determine the spatial relationships between obtained images. In addition, processor 21 may be programmed to independently use image data to attempt to ascertain relative orientation of the HHD 20 and may compare that information to information generated via the gyroscope and/or the accelerometer to more accurately determine spatial relationships.
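One way to use the gyroscope for this purpose is sketched below: angular rate samples captured between exposures are integrated to estimate how far the HHD swung, and processing stalls when the swing is too large to reliably associate image features. The sample format, rate and threshold are assumptions:

```python
# Sketch: integrate gyroscope rate samples between exposures to estimate
# how much the HHD was reoriented from one image to the next.

def orientation_change(rate_samples_dps, sample_hz: float = 100.0):
    """rate_samples_dps: (pitch, roll, yaw) angular rates in deg/s,
    captured between the two exposures."""
    dt = 1.0 / sample_hz
    pitch = sum(s[0] for s in rate_samples_dps) * dt
    roll  = sum(s[1] for s in rate_samples_dps) * dt
    yaw   = sum(s[2] for s in rate_samples_dps) * dt
    return pitch, roll, yaw

def likely_same_face(delta, limit_deg: float = 30.0) -> bool:
    # If the HHD swung more than ~30 degrees, stall processing and ask
    # the user for more images rather than guessing which edge was imaged.
    return all(abs(a) < limit_deg for a in delta)

# e.g., 50 samples at 100 Hz (half a second) of a 60 deg/s yaw swing:
delta = orientation_change([(0.0, 0.0, 60.0)] * 50)
print(delta, likely_same_face(delta))   # (0.0, 0.0, 30.0) False
```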
Referring now to
Referring specifically to
In the form reading embodiment, as in the box dimensions embodiment described above, HHD 20 may provide feedback indicating when additional images should be obtained and when the vision process has been completed. Again, the feedback may include illumination of a green LED when additional images should be obtained and illumination of a red LED when the process has been completed. In addition, where form format is known, processor 21 may be programmed to provide a mockup image of the form highlighting portions of the form that have already been imaged and thereby indicating other image portions that are required. To this end see exemplary image 464i in
Referring now to
In the embodiment of
Referring now to
Referring still to
Referring yet again to
At decision block 506 in
Referring now to
In at least some embodiments it is contemplated that a required feature database may include data including anticipated relative juxtapositions of required features on an item to be imaged. For example, in the case of the box application described above with respect to
Referring also to
Three exemplary applications are described above in which an HHD 20 can be used to obtain multiple images of an item to be imaged and can identify a plurality of features within the item that are required to complete vision processes. In at least some embodiments it is contemplated that a single HHD may be programmed to perform a plurality of different applications within a facility. For example, consistent with the applications above, a single HHD may be programmed to perform any of the box dimension calculating application, the form imaging application and the VIN number reading application. In at least some embodiments, where processor 21 can perform multiple applications, it is contemplated that processor 21 may be able to automatically identify which one of a plurality of different applications the HHD is being used to perform as a function of information derived from obtained images. Thus, for instance, processor 21 may be programmed to recognize the edge of a box as a feature in an image and thereby determine that box dimension measurement should be performed. As another instance, processor 21 may be programmed to recognize any portion of a standard form and thereafter perform the form imaging process.
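Such automatic application selection can be sketched as a simple classifier dispatch; the three predicates below are trivial stubs standing in for the box edge, standard form and VIN character recognition steps described above:

```python
# Sketch of automatic application selection from image content.

def looks_like_box_edge(img) -> bool:    return False  # stub classifier
def matches_standard_form(img) -> bool:  return False  # stub classifier
def contains_vin_chars(img) -> bool:     return False  # stub classifier

APPLICATIONS = [
    ("box dimension measurement", looks_like_box_edge),
    ("form imaging",              matches_standard_form),
    ("VIN reading",               contains_vin_chars),
]

def select_application(img):
    for name, test in APPLICATIONS:
        if test(img):
            return name       # run this application's process steps
    return None               # keep collecting images
```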
Referring now to
Thus, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims. To apprise the public of the scope of this invention, the following claims are made:
This Application is a continuation of U.S. patent application Ser. No. 12/337,077 filed Dec. 17, 2008, now abandoned, and a continuation-in-part of U.S. patent application Ser. No. 13/163,954 filed Jun. 20, 2011, which is a divisional of U.S. patent application Ser. No. 11/123,480 filed May 6, 2005, which issued as U.S. Pat. No. 7,963,448, and which is a continuation-in-part of U.S. patent application Ser. No. 11/020,640, filed on Dec. 22, 2004, now abandoned, each of which is hereby incorporated by reference in its entirety.