The present invention relates to an imaging-based bar code reader for identifying non-barcoded products.
A bar code is a coded pattern of graphical indicia comprising a series of bars and spaces of varying widths, the bars and spaces having differing light-reflecting characteristics. The pattern of bars and spaces encodes information. Bar codes may be one dimensional (e.g., a UPC bar code) or two dimensional (e.g., a DataMatrix bar code). Systems that read, that is, image and decode bar codes using imaging camera systems are typically referred to as imaging-based bar code readers or bar code scanners.
Imaging-based bar code readers may be portable or stationary. A portable bar code reader is one that is adapted to be held in a user's hand and moved with respect to a target indicia, such as a target bar code, to be read, that is, imaged and decoded. Stationary bar code readers are mounted in a fixed position, for example, relative to a point-of-sale counter. Target objects, e.g., a product package that includes a target bar code, are moved or swiped past one or more transparent windows and thereby pass within a field of view of the stationary bar code reader. The bar code reader typically provides an audible and/or visual signal to indicate that the target bar code has been successfully imaged and decoded.
A typical installation for a stationary imaging-based bar code reader is a point-of-sale counter/cash register where customers pay for their purchases. The reader is typically enclosed in a housing that is installed in the counter and normally includes a vertically oriented transparent window and/or a horizontally oriented transparent window, either of which may be used for reading the target bar code affixed to the target object, i.e., the product or product packaging having the target bar code imprinted on or affixed to it. The sales person (or customer, in the case of self-service checkout) sequentially presents each target object's bar code to either the vertically oriented window or the horizontally oriented window, whichever is more convenient given the size and shape of the target object and the position of the bar code on the target object.
A stationary imaging-based bar code reader that has a plurality of imaging cameras can be referred to as a multi-camera imaging-based scanner or bar code reader. In a multi-camera imaging reader, each camera system typically is positioned behind one of the plurality of transparent windows such that it has a different field of view from every other camera system. While the fields of view may overlap to some degree, the effective or total field of view of the reader is increased by adding additional camera systems. Hence the desirability of multi-camera readers as compared to single-camera readers, which have a smaller effective field of view and require presentation of a target bar code in a very limited range of orientations to obtain a successful, decodable image of the target bar code.
U.S. Pat. No. 5,717,195 to Feng et al. concerns an "Imaging Based Slot Dataform Reader" having a mirror, a camera assembly with a photosensor array, and an illumination system. The disclosure of this patent is incorporated herein by reference.
Bar code scanners using imagers have become common in many retail applications. In theory, an imaging bar code reader could capture an image of an item such as a screw, bolt, or washer to identify the product. Merely capturing an image, however, is not enough to allow the particular part to be identified. Identifying a screw requires a measurement of the length and diameter of the screw's threaded portion, as well as the thread pitch and an identification of the screw head. Both the shape and the size of these features must be obtained to accurately identify the screw. Similar measurements are needed to identify a bolt, washer, or nail.
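To make the set of needed measurements concrete, the following hypothetical Python record collects the features named above. The field names, units, and tolerance are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ScrewFeatures:
    """Measurements needed to identify a screw (names and units illustrative)."""
    thread_length_in: float   # length of the threaded portion, in inches
    diameter_in: float        # major diameter of the thread
    thread_pitch_tpi: float   # threads per inch
    head_shape: str           # e.g. "flat", "round", "pan"

def matches(a: ScrewFeatures, b: ScrewFeatures, tol: float = 0.02) -> bool:
    """Compare two feature records within a simple assumed tolerance."""
    return (abs(a.thread_length_in - b.thread_length_in) <= tol
            and abs(a.diameter_in - b.diameter_in) <= tol
            and abs(a.thread_pitch_tpi - b.thread_pitch_tpi) <= 0.5
            and a.head_shape == b.head_shape)
```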
Some items sold in retail stores do not have labels bearing bar codes, so they cannot be scanned at the cash register. Examples of such products are screws, nails, nuts and washers, all of which can be purchased at home improvement stores. When these items are purchased, the sales clerk must identify the item, typically by comparing it to a picture located near the cash register, and then manually enter a SKU number via the cash register keyboard. This is time consuming, and there is a good chance the wrong SKU number will be entered, either because the wrong item is identified or because the clerk makes an error in entering the SKU into the cash register. Other items, such as fruit or vegetables sold in supermarkets, do not have bar codes but may have identifying indicia that must be manually entered by the clerk.
The present disclosure concerns a bar code reader that can both interpret bar codes and determine a feature of a target object not having a bar code affixed thereto. The bar code reader includes a housing including one or more transparent windows and defining a housing interior region. As a target object is swiped or presented in relation to the transparent windows, an image of the target object is captured.
A camera has an image capture sensor array positioned within the housing interior region for capturing an image of a bar code within a camera field of view. An image processing system has a processor for decoding a bar code carried by the target object. If the target object has no bar code, the image processing system determines a feature of the target object from images captured by the imaging system.
These and other objects, advantages, and features of the exemplary embodiment of the invention are described in detail in conjunction with the accompanying drawings.
In the exemplary embodiment, multiple cameras C1-C6 are mounted to a printed circuit board 22 inside the housing, and each camera defines a two-dimensional field-of-view FV1, FV2, FV3, FV4, FV5, FV6. Positioned behind and adjacent to the windows H, V are reflective mirrors that define a given camera field-of-view such that the respective fields-of-view FV1-FV6 pass from the housing 20 through the windows to create an effective total field-of-view (TFV) for the reader 10 in a region of the windows H, V, outside the housing 20. Each camera C1-C6 has an effective working range WR, shown schematically in the drawings.
In accordance with one use, either a sales person or a customer will present or swipe a product or target object 32 selected for purchase to the housing 20.
Imaging Optics
Each camera assembly C1-C6 of the imaging system 12 captures a series of image frames of its respective field-of-view FV1-FV6. The series of image frames for each camera assembly C1-C6 is shown schematically as IF1, IF2, IF3, IF4, IF5, IF6 in the drawings.
Digital signals 35 that make up the frames are coupled to a bus interface 42, where the signals are multiplexed by a multiplexer 43 and then communicated to a memory 44 in an organized fashion so that the processor knows which image representation belongs to a given camera.
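A minimal sketch of this organized storage, assuming per-camera frame queues; the queue depth and byte-oriented frames are illustrative, as the disclosure only requires that stored frames remain attributable to their originating camera.

```python
from collections import deque

# One bounded queue per camera, so the processor can tell which stored
# image came from which imager (queue depth is an assumption).
NUM_CAMERAS = 6
MAX_FRAMES = 8

frame_queues = {cam: deque(maxlen=MAX_FRAMES) for cam in range(1, NUM_CAMERAS + 1)}

def store_frame(camera_id: int, frame_bytes: bytes) -> None:
    """Multiplexed writes land in the queue for the originating camera."""
    frame_queues[camera_id].append(frame_bytes)

def latest_frame(camera_id: int) -> bytes | None:
    """Processor-side read of the newest frame for one camera."""
    q = frame_queues[camera_id]
    return q[-1] if q else None
```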
The image processors 15 access the image frames IF1-IF6 from memory 44 and search for image frames that include an imaged target bar code 30′. If the imaged target bar code 30′ is present and decodable in one or more image frames, the decoder 16 attempts to decode the imaged target bar code 30′ using one or more of the image frames having the imaged target bar code 30′ or a portion thereof. If no bar code is present, the image processors look for items such as the screw 230 shown in the drawings.
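The decode-or-measure decision can be sketched as follows; find_barcode, decode, and measure_features are hypothetical stand-ins for the image processors 15 and decoder 16, which the disclosure does not expose as an API.

```python
def process_frame(frame, find_barcode, decode, measure_features):
    """Try to decode a bar code; fall back to feature measurement.

    The three callables are assumed interfaces, not part of the disclosure.
    """
    region = find_barcode(frame)          # locate an imaged bar code, if any
    if region is not None:
        data = decode(frame, region)      # may fail on a blurry or partial frame
        if data is not None:
            return ("decoded", data)
    return ("measured", measure_features(frame))
```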
Each camera includes a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) imager, or other imaging pixel array, operating under the control of the image processing system 40. In one exemplary embodiment, the sensor array comprises a two-dimensional (2D) CMOS array with a typical pixel array size on the order of 752×480 pixels. The illumination-receiving pixels of the sensor array define a sensor array surface secured to a printed circuit board for stability. The sensor array surface is substantially perpendicular to an optical axis of the imaging lens assembly; that is, a z axis perpendicular to the sensor array surface would be substantially parallel to the optical axis of the focusing lens. The pixels of the sensor array surface are disposed in an orthogonal arrangement of rows and columns.
The reader circuitry 11 includes the imaging system 12, the memory 44 and a power supply 11a. The power supply 11a is electrically coupled to and provides power to the circuitry 11 of the reader. Optionally, the reader 10 may include an illumination system 60 (shown schematically in the drawings).
Decoding Images
As is best seen in the drawings, the decoding circuitry 14 performs a process 110 on selected image frames by getting an image 120 from memory and determining 122 if the image has a bar code. If so, the processor 15 attempts to decode 124 any decodable image within the image frames, e.g., the imaged target bar code 30′. If the decoding is successful, decoded data 56, representative of the data/information coded in the target bar code 30, is then output 126 via a data output port 58 and/or displayed to a user of the reader 10 via a display 59. Upon achieving a good read of the target bar code 30, that is, when the bar code 30 has been successfully imaged and decoded, a speaker 34b and/or an indicator LED 34a is activated by the bar code reader circuitry 11 to indicate to the user that the target bar code 30 has been successfully read.
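The recited steps (image acquisition 120, bar code detection 122, decoding 124, output 126) map onto a short control flow. In the following sketch, the memory, decoder, output_port, display, beeper, and led objects are hypothetical stand-ins for the circuitry 14, port 58, display 59, speaker 34b, and LED 34a; their method names are assumptions.

```python
def process_110(memory, decoder, output_port, display, beeper, led):
    """One pass of the decode loop (step numbers from the text in comments)."""
    image = memory.get_image()              # step 120: get an image frame
    if decoder.has_barcode(image):          # step 122: is a bar code present?
        data = decoder.decode(image)        # step 124: attempt to decode
        if data is not None:
            output_port.write(data)         # step 126: output decoded data 56
            display.show(data)              # and/or display it to the user
            beeper.beep()                   # signal a good read
            led.flash()
```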
Acquiring data from images of items that do not have bar codes with a bar code reader 10 requires knowledge of the pixels per inch at a location having a fixed distance to the object or item that is imaged. For example, a multiple-camera reader has a window H upon which objects being imaged, such as a screw 230, can be positioned during imaging. One goal of use of the reader is to determine the pitch, length and type of screw based on the length dimensions of the screw's features. The reader can also distinguish, for example, between various head shapes (flat head versus rounded, etc.) by comparing the imaged shapes with a database of shapes stored in a memory of the reader or in a host computer with which the reader communicates. The reader 10 views the object through the window H on which it rests, so the distance from the imager or camera is known and measurements of the item are based on a fixed relation of pixels per inch of the sensor array that captures the image.
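As a minimal sketch of this fixed-distance measurement, assuming an illustrative calibration of 85 pixels per inch at the window surface (the actual value depends on the sensor and optics):

```python
PIXELS_PER_INCH = 85.0  # assumed calibration for objects resting on window H

def pixels_to_inches(pixel_count: int, ppi: float = PIXELS_PER_INCH) -> float:
    """Convert a measured pixel span into inches at the window surface."""
    return pixel_count / ppi

# Example: a threaded portion spanning 128 pixels in the captured image
thread_length_in = pixels_to_inches(128)   # -> about 1.5 inches at 85 ppi
```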
Another way to use an imaging-based bar code reader is to image the item simultaneously with a reference target such as a grid 231. The support surface 232 shown in the drawings supports the item as it is imaged.
The size of the object being imaged can be compared to reference marks on the grid. For example, if the marks are an inch apart, the processor can count how many pixels appear between the marks, establishing a pixels-per-inch reference value. Such calibration would typically be done during manufacture of the reader.
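This calibration reduces to counting pixels between marks of known spacing. A minimal sketch, with illustrative mark positions:

```python
def pixels_per_inch(mark_positions_px, mark_spacing_in=1.0):
    """Derive a pixels-per-inch value from imaged grid marks.

    mark_positions_px: pixel coordinates of successive reference marks
    known to be mark_spacing_in apart on the grid.
    """
    spans = [b - a for a, b in zip(mark_positions_px, mark_positions_px[1:])]
    return (sum(spans) / len(spans)) / mark_spacing_in

# Marks imaged at these pixel columns, one inch apart on the grid:
ppi = pixels_per_inch([10, 95, 181, 266])   # -> roughly 85.3 px/in
```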
Pixel distances across various dimensions of the object being imaged are subsequently determined during use of the reader to measure those features of the object that are of concern. Different classes of objects are identified by different features. A generic class type (screw, bolt, nut, etc.) is entered 130 at a user input to the reader. The reader determines 132 a specific feature of the object based on knowledge conveyed by the generic class. Alternatively, a variety of different classes are stored in the reader, and the reader processor 15 determines, by pattern matching the image of the object, what generic class the object falls into. Once all features of an item (possibly including its weight) have been determined, the processor identifies 134 the item and optionally displays the information on a visual display.
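A hedged sketch of this class-driven lookup (steps 130, 132, 134 in comments); the class-to-feature table, tolerance, and catalog layout are illustrative assumptions, not values from the disclosure.

```python
# Which measured features matter depends on the generic class (step 130/132).
CLASS_FEATURES = {
    "screw":  ["thread_length", "diameter", "thread_pitch", "head_shape"],
    "bolt":   ["shank_length", "diameter", "thread_pitch", "head_shape"],
    "washer": ["outer_diameter", "inner_diameter"],
    "nut":    ["width_across_flats", "thread_pitch"],
}

def identify(generic_class, measurements, catalog, tol=0.02):
    """Step 134: match measured features against a catalog of known items."""
    wanted = CLASS_FEATURES[generic_class]
    for item in catalog:
        ok = True
        for f in wanted:
            expected = item[f]
            if isinstance(expected, str):
                ok = ok and measurements[f] == expected
            else:
                ok = ok and abs(measurements[f] - expected) <= tol
        if ok:
            return item
    return None

catalog = [{"sku": "W-0406", "outer_diameter": 0.5, "inner_diameter": 0.2}]
match = identify("washer", {"outer_diameter": 0.51, "inner_diameter": 0.19}, catalog)
# match -> the W-0406 record (both dimensions within the assumed 0.02 in tolerance)
```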
Another possibility is to have reference marks 242 inside the scanner near an edge of the window, within the camera field of view. In this instance, fold mirrors are designed to extend the visible camera field of view outside the borders of the clear window aperture, such as the aperture defined by the window H. Calibration marks around the window, or on a calibration target placed on the window, allow the reader to determine pixels per inch anywhere on the window even if the window is tilted with respect to the camera.
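One conventional way to realize "pixels per inch anywhere on the window, even if tilted" is a perspective (homography) correction computed from four imaged calibration marks. The disclosure does not specify a method, so the following OpenCV sketch, with invented mark coordinates and window dimensions, is only one plausible implementation.

```python
import numpy as np
import cv2

# Pixel locations of four calibration marks 242 as seen by the camera
# (values invented for illustration), and where those marks would sit in a
# fronto-parallel view at 85 px/in, assuming the marks outline a
# 6 in x 4 in rectangle on the window.
PPI = 85.0
seen_px = np.float32([[102, 88], [598, 120], [580, 452], [95, 430]])
flat_px = np.float32([[0, 0], [6 * PPI, 0], [6 * PPI, 4 * PPI], [0, 4 * PPI]])

H = cv2.getPerspectiveTransform(seen_px, flat_px)

def rectify(image):
    """Warp a tilted view so one pixel corresponds to 1/85 inch everywhere."""
    return cv2.warpPerspective(image, H, (int(6 * PPI), int(4 * PPI)))
```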
In the case of individual fruits or vegetables, which have an identification sticker but not a bar code, the number can be scanned and interpreted by the imager using optical character recognition technology. Use of such optical character recognition can eliminate the need for manual data entry of such codes and instead allow the reader to determine the code and correlate it with the product.
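The disclosure does not name an OCR engine; as one possibility, the open-source Tesseract engine (via pytesseract) can be restricted to digits, which suits numeric produce codes. The function name and page-segmentation settings below are illustrative.

```python
from PIL import Image
import pytesseract

def read_sticker_code(image_path: str) -> str:
    """Read the numeric code from a produce-sticker image.

    --psm 7 treats the input as a single text line; the whitelist
    restricts recognition to digits.
    """
    img = Image.open(image_path)
    text = pytesseract.image_to_string(
        img, config="--psm 7 -c tessedit_char_whitelist=0123456789")
    return text.strip()
```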
In one embodiment, the reader 10 is used in conjunction with a scale 234 that weighs items sitting on the horizontal window H. The scale can be used to confirm that an item has been correctly identified. For example, once a screw has been measured by the processors 15 and an identification has been tentatively made, the weight of the screw, as measured by the scale during the optical measurement process, is compared to a database of weights of the various screws, nuts, washers, etc. that are sold in a given store location. If the measured weight and the weight in the database match, it is highly likely that the identification is correct. Once an individual item has been identified, the scale can count multiple items, if they are all placed on the scale, by dividing the total weight by the unit weight determined for a single item. The customer can then be charged for the correct number of screws without the clerk counting them.
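The weight check and the count-by-division step are simple arithmetic; a sketch follows, with invented item names, weights, and tolerance.

```python
def confirm_and_count(unit_weight_db, item_id, measured_weight,
                      total_weight, tolerance=0.05):
    """Confirm an identification by weight, then count items on the scale.

    unit_weight_db maps item ids to known single-item weights; all
    values and the 5% tolerance are illustrative assumptions.
    """
    expected = unit_weight_db[item_id]
    if abs(measured_weight - expected) > tolerance * expected:
        return None                        # weight disagrees: identification suspect
    return round(total_weight / expected)  # count by dividing total by unit weight

count = confirm_and_count({"screw-8x1": 0.11}, "screw-8x1",
                          measured_weight=0.11, total_weight=1.32)  # -> 12
```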
The scale can also help distinguish between washers of different thicknesses, which will have different weights; different thicknesses may not be distinguishable by the processors from the image alone. Weights will also help distinguish screws, nuts and washers made of different materials, since stores may sell both metal and plastic screws, nuts and washers. The color (if a color reader is used) or gray scale value (with a monochrome sensor) can also help distinguish different finishes or materials. If a hand-held scanner is being used, a scale can be placed below or near the reader where the items being identified are to be placed. The weight database can be automatically created when new items are added to the store's inventory by storing the measured weight when a scanner sees a new item. Weights of several items can be averaged for a more accurate standard weight for each item.
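Building the weight database incrementally with averaging across sightings can be done as a running mean; the (average, count) storage layout below is an assumption.

```python
def update_unit_weight(db: dict, item_id: str, new_weight: float) -> None:
    """Fold a newly measured weight into the stored average for an item.

    db[item_id] holds (average_weight, sample_count); layout is assumed.
    """
    avg, n = db.get(item_id, (0.0, 0))
    db[item_id] = ((avg * n + new_weight) / (n + 1), n + 1)

db = {}
for w in (0.110, 0.112, 0.108):   # three sightings of the same screw
    update_unit_weight(db, "screw-8x1", w)
# db["screw-8x1"][0] is now 0.110, the averaged standard weight
```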
Camera Field of View
The camera or imager C3 and its associated optics are symmetrical, with respect to a center line of the reader, to imager C1. Camera C3 faces generally vertically upward toward an inclined folding mirror M3A substantially directly overhead at a right side of the horizontal window H. The folding mirror M3A faces another inclined, narrow folding mirror M3B located at a left side of the horizontal window H. The folding mirror M3B faces still another inclined, wide folding mirror M3C adjacent the mirror M3A. The folding mirror M3C faces out through the generally horizontal window H toward the left side of the dual-window reader.
Imager or camera C2 and its associated optics are located between imagers C1 and C3 and their associated optics. Imager C2 faces generally vertically upward toward an inclined folding mirror M2A substantially directly overhead generally centrally of the horizontal window H at one end thereof. The folding mirror M2A faces another inclined folding mirror M2B located at the opposite end of the horizontal window H. The folding mirror M2B faces out through the window H in an upward direction toward the vertical window V in the housing 20.
Features and functions of the fold mirrors shown in the figures are described in further detail in U.S. patent application Ser. No. 12/245,111 to Drzymala et al., filed Oct. 3, 2008, which is incorporated herein by reference. When a mirror is used in an optical layout to reflect the reader field of view in another direction, the mirror may be thought of as an aperture (an aperture is defined as a hole or an opening through which light is admitted). The depictions in the copending application show optical layouts in which one or more fold mirrors achieve long path lengths within the reader housing. When a mirror clips or defines the imaging or camera field of view, this is referred to as vignetting. When a mirror clips extraneous or unneeded light from a source such as a light-emitting diode, this is commonly referred to as baffling. In the figures, three fold mirrors are used to define a given field of view; other numbers of mirrors, however, could be used to direct light to a field of view outside the housing.
What have been described above are examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
3211046 | Kennedy | Oct 1965 | A |
3947816 | Rabedeau | Mar 1976 | A |
4613895 | Burkey et al. | Sep 1986 | A |
4794239 | Allais | Dec 1988 | A |
5058188 | Yoneda | Oct 1991 | A |
5059779 | Krichever et al. | Oct 1991 | A |
5124539 | Krichever et al. | Jun 1992 | A |
5200599 | Krichever et al. | Apr 1993 | A |
5304786 | Pavlidis et al. | Apr 1994 | A |
5559562 | Ferster | Sep 1996 | A |
5703349 | Meyerson et al. | Dec 1997 | A |
5705802 | Bobba et al. | Jan 1998 | A |
5717195 | Feng et al. | Feb 1998 | A |
5801370 | Katoh et al. | Sep 1998 | A |
5936218 | Ohkawa et al. | Aug 1999 | A |
5987428 | Walter | Nov 1999 | A |
6006990 | Ye et al. | Dec 1999 | A |
6141062 | Hall et al. | Oct 2000 | A |
6330973 | Bridgelall et al. | Dec 2001 | B1 |
6336587 | He et al. | Jan 2002 | B1 |
6340114 | Correa et al. | Jan 2002 | B1 |
6392688 | Barman et al. | May 2002 | B1 |
6538243 | Bohn et al. | Mar 2003 | B1 |
6629642 | Swartz et al. | Oct 2003 | B1 |
6899272 | Krichever et al. | May 2005 | B2 |
6924807 | Ebihara et al. | Aug 2005 | B2 |
6951304 | Good | Oct 2005 | B2 |
6991169 | Bobba et al. | Jan 2006 | B2 |
7076097 | Kondo et al. | Jul 2006 | B2 |
7116353 | Hobson et al. | Oct 2006 | B2 |
7191947 | Kahn et al. | Mar 2007 | B2 |
7219831 | Murata | May 2007 | B2 |
7280124 | Laufer et al. | Oct 2007 | B2 |
7416119 | Inderrieden | Aug 2008 | B1 |
7430682 | Carlson et al. | Sep 2008 | B2 |
7475823 | Brock | Jan 2009 | B2 |
7533819 | Barkan et al. | May 2009 | B2 |
7543747 | Ehrhart | Jun 2009 | B2 |
7619527 | Friend et al. | Nov 2009 | B2 |
7757955 | Barkan et al. | Jul 2010 | B2 |
8079523 | Barkan et al. | Dec 2011 | B2 |
20010042789 | Krichever et al. | Nov 2001 | A1 |
20020138374 | Jennings et al. | Sep 2002 | A1 |
20020162887 | Detwiler | Nov 2002 | A1 |
20030029915 | Barkan et al. | Feb 2003 | A1 |
20030078849 | Snyder | Apr 2003 | A1 |
20030082505 | Frohlich et al. | May 2003 | A1 |
20030102377 | Good | Jun 2003 | A1 |
20030122093 | Schauer | Jul 2003 | A1 |
20030213841 | Josephson et al. | Nov 2003 | A1 |
20040146211 | Knapp et al. | Jul 2004 | A1 |
20040189472 | Acosta et al. | Sep 2004 | A1 |
20050098633 | Poloniewicz et al. | May 2005 | A1 |
20050259746 | Shinde et al. | Nov 2005 | A1 |
20060022051 | Patel et al. | Feb 2006 | A1 |
20060043193 | Brock | Mar 2006 | A1 |
20060118628 | He et al. | Jun 2006 | A1 |
20060180670 | Acosta et al. | Aug 2006 | A1 |
20070001013 | Check et al. | Jan 2007 | A1 |
20070079029 | Carlson et al. | Apr 2007 | A1 |
20080011846 | Cato | Jan 2008 | A1 |
20080122969 | Alakarhu | May 2008 | A1 |
20080128509 | Knowles et al. | Jun 2008 | A1 |
20080296382 | Connell, II et al. | Dec 2008 | A1 |
20090026271 | Drzymala et al. | Jan 2009 | A1 |
20090084854 | Carlson et al. | Apr 2009 | A1 |
20100102129 | Drzymala et al. | Apr 2010 | A1 |
20100165160 | Olmstead et al. | Jul 2010 | A1 |
Number | Date | Country |
---|---|---|
1006475 | Jun 2000 | EP |
1223535 | Jun 2009 | EP |
0182214 | Nov 2001 | WO |
2009006419 | Jan 2009 | WO |
2010053682 | May 2010 | WO |
Entry |
---|
International Search Report and Written Opinion dated Jan. 28, 2010 in related case PCT/US2009/061838. |
International Preliminary Report on Patentability and Written Opinion for International Application No. PCT/US2009/061838 mailed on May 19, 2011. |
International Search Report and Written Opinion for International Patent Application No. PCT/US2009/067816 mailed on Mar. 26, 2010. |
Non Final Office Action mailed on May 2, 2011 in U.S. Appl. No. 12/334,830, Edward D. Barkan, filed on Dec. 15, 2008. |
Notice of Allowance mailed on Oct. 17, 2011 in U.S. Appl. No. 12/334,830, Edward D. Barkan, filed on Dec. 15, 2008. |
International Preliminary Report on Patentability and Written Opinion for International Application No. PCT/US2009/067816 mailed on Jun. 30, 2011. |
International Search Report and Written Opinion for International Patent Application No. PCT/US2009/061218 mailed on Jan. 25, 2010. |
Non Final Office Action mailed on Sep. 30, 2010 in U.S. Appl. No. 12/260,168, Mark Drzymala, filed on Oct. 29, 2008. |
Final Office Action mailed on May 11, 2011 in U.S. Appl. No. 12/260,168, Mark Drzymala, filed on Oct. 29, 2008. |
International Preliminary Report on Patentability and Written Opinion for International Patent application No. PCT/US2009/061218 mailed on May 12, 2011. |
Notice of Allowance mailed on Apr. 19, 2010 in U.S. Appl. No. 12/112,275, Edward D. Barkan, filed on Apr. 30, 2008. |
Non Final Office Action mailed Sep. 7, 2011 in related U.S. Appl. No. 12/241,153, Mark Drzymala, filed on Sep. 30, 2008. |
International Search Report and Written Opinion for counterpart International Application No. PCT/US2008/068810 mailed on Feb. 10, 2008. |
Notice of Allowance mailed Jun. 30, 2010, in U.S. Appl. No. 11/823,818, Edward D. Barkan, filed on Jun. 28, 2007. |
Notice of Allowance mailed Jun. 1, 2010, in U.S. Appl. No. 11/823,818, Edward Barkan, filed on Jun. 28, 2007. |
Non Final Office Action mailed Jan. 20, 2010, in U.S. Appl. No. 11/823,818, Edward Barkan., filed on Jun. 28, 2007. |
Notice of Allowance mailed Sep. 9, 2011, in counterpart U.S. Appl. No. 12/315,235, James Giebel, filed on Dec. 1, 2008. |
Notice of Allowance mailed Jun. 17, 2011, in counterpart U.S. Appl. No. 12/315,235, James Giebel, filed on Dec. 1, 2008. |
Australian Office Action mailed Nov. 2, 2010, in Australia for counterpart Application No. 2008272946. |
Non Final Office Action mailed Oct. 31, 2011, in counterpart U.S. Appl. No. 12/245,111, Mark Drzymala, filed on Oct. 3, 2008. |
International Preliminary Report on Patentability and Written Opinion for International Patent Application No. PCT/US2008/068810 mailed on Jan. 14, 2010. |
Final Office Action mailed May 23, 2012 in counterpart U.S. Appl. No. 12/241,153, Mark Drzymala, filed Sep. 30, 2008. |