This application relates generally to optical systems and elements and more particularly to imaging systems, such as those useful for reading bar codes.
Common imagers, such as interline transfer charge-coupled devices (IT-CCDs) and certain complementary metal oxide semiconductor (CMOS) cameras, such as so-called 4-T pixel sensors (also known as frame-shuttered imagers), form an electronic image by simultaneously exposing all of their pixel elements to the object to be imaged. To image a moving object with such an imager, a frame shutter can be provided to open briefly and thereby momentarily expose all of the imager's pixels at the same time, resulting in a “freeze frame” image. The time for which the shutter remains open (the frame exposure time) determines the maximum speed at which the object to be imaged can move while still producing an image of adequate quality. While mechanical shuttering can facilitate satisfactory imaging of fast-moving objects, mechanical shuttering mechanisms adversely affect the complexity, cost, size, weight, power consumption, reliability, and durability of an imaging system.
On the other hand, a rolling-reset imager, such as certain CMOS cameras, forms an image by sequentially activating individual rows of pixels within the pixel grid array, cycling through all of the rows once per frame (i.e., at the imager's frame rate). Each row is exposed for N units of time during each frame, where N specifies the exposure time; this is accomplished by enabling the gathering of pixel values for a row N rows before that particular row is to be read out, and the readout process clears the row. This method enables the imager to capture images over a wide range of intensity, as each row can be exposed for as little as one unit of time and for as long as the entire frame time. An unfortunate consequence of this exposure method is that each row is exposed at a slightly different time. If N=1, for example, then the rows expose one after another, without overlap. If a longer exposure time (N>1) is implemented, then the exposure of each row is staggered from that of the preceding row by 1/N of the total exposure time. If the imager is trying to capture a moving object, this staggered exposure causes motion artifacts. For example, if a thin, vertically oriented object, such as a pencil, moves from left to right in front of such an imager at a sufficiently high speed, the captured image will show a diagonally oriented pencil, due to the effects of the staggered exposure times.
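To make the staggered exposure concrete, the following Python sketch (illustrative only; the row count, frame rate, and value of N are assumed for the example and do not come from this application) computes each row's exposure window and shows that the start of exposure shifts by one row period per row, producing the skew described above.

```python
# Illustrative sketch (not from the application): exposure timing for a
# rolling-reset imager. Row r begins integrating N row periods before it is
# read out, so the start of exposure is staggered by one row period per row.

def exposure_windows(num_rows, n_rows_exposure, row_period_s):
    """Return (start, end) exposure times, in seconds, for each row of one frame."""
    windows = []
    for r in range(num_rows):
        read_time = r * row_period_s                        # row r is read out at this time
        start = read_time - n_rows_exposure * row_period_s  # reset N row periods earlier
        windows.append((start, read_time))
    return windows

# Assumed example: 480 rows at 30 frames/s (row period ~69 us), exposure of N = 10 rows.
if __name__ == "__main__":
    rows, frame_rate, n = 480, 30.0, 10
    row_period = (1.0 / frame_rate) / rows
    w = exposure_windows(rows, n, row_period)
    print("row   0 window:", w[0])
    print("row 479 window:", w[-1])  # starts almost a full frame later -> skewed image of a moving object
```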
Rolling-reset CMOS imagers are generally less expensive than CCD imagers because the CMOS fabrication process is simpler than the CCD process, and rolling-reset CMOS imagers are generally less expensive than frame-shuttered CMOS imagers because they typically have fewer transistors per pixel. However, it is challenging to operate a rolling-reset imager in a freeze-frame mode of operation. For all pixels to be exposed at the same time, each row must be set up to expose for the entire frame time, and this long exposure time causes considerable motion blur. A mechanical shutter can be used in conjunction with a full-frame exposure to limit the intrusion of light to a narrow time period corresponding to the desired exposure time, but a mechanical shutter can be bulky, expensive, and less reliable than all-electronic means.
According to one embodiment, an imaging system comprises a rolling-reset imager that forms an image of an object, a light source that illuminates the object with pulsed light, and an optical filter disposed between the object and the rolling-reset imager. The pulsed light has an illumination frequency spectrum and an illumination pulse width that defines an effective exposure time for forming the image of the object. The optical filter has a frequency pass band permitting transmission of a significant portion of the illumination frequency spectrum while at least approximately inhibiting transmission of at least some light having frequencies outside the illumination frequency spectrum.
According to another embodiment, a method illuminates an object with illumination light in a given frequency range, so that the illumination light reflects from the object along with background ambient light. The method filters the reflected light so as to attenuate at least some of the background ambient light by a greater attenuation factor than the illumination light. The method forms a pixelized image based on the filtered light on a rolling-reset basis.
Additional details concerning the construction and operation of particular embodiments are set forth in the following sections with reference to the below-listed drawings.
With reference to the above-listed drawings, this section describes particular embodiments and their detailed construction and operation. As one skilled in the art will appreciate in light of this disclosure, certain embodiments are capable of achieving certain advantages over the known prior art, including some or all of the following: (1) enabling the utilization of more economical rolling-reset imagers, such as CMOS rolling-reset imagers; (2) elimination of the need to use a physical shuttering mechanism; (3) suppression of background illumination; and (4) avoidance of visible flickering from the illumination source, which can be discernable and annoying to human observers. These and other advantages of various embodiments will be apparent upon reading the remainder of this section.
Placed in front of the imager 110 is a lens 140, which provides a field of view 150, in which is an object 160 to be imaged. In one use of the imaging system 100, the object 160 is an optical code, such as a bar code. Disposed between the lens 140 and the object 160 is an optical filter 170. An enclosure 180 covers the imager 110 and the lens 140 except where the optical filter 170 is located across the field of view 150, so that all light reaching the imager 110 passes through the optical filter 170, preferably after reflecting off the object 160.
The optical filter 170 ideally has a lowpass, highpass, or bandpass frequency response with a pass band matching as nearly as possible the spectrum of the light generated by the light sources 130. In this way, the object 160 can be imaged by the imager 110 when the light sources 130 are illuminating the object 160 but not when the light sources 130 are not illuminating the object 160. Other light, such as background ambient light, having frequencies outside of the pass band of the optical filter 170 is desirably attenuated by the optical filter 170, preferably to an extent that such light does not appreciably register at the imager 110. For example, if the light sources 130 are near-IR LEDs emitting at a wavelength of 850 nm and the background ambient illumination is fluorescent lighting, which has little emission in the near-IR range, useful versions of the optical filter 170 include the WRATTEN® #87 IR filter, available from Eastman Kodak Co., Rochester, N.Y.; the CR-39® IR longpass filter, available from Opticast, Inc., Findlay, Ohio; and the R-72 IR pass filter and RG715 IR longpass filter, both of which pass wavelengths longer than 700 nm with high transmittance, as well as the RT830 bandpass filter, all available from various sources such as Edmund Industrial Optics, Barrington, N.J.
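As an illustration of the discrimination such a filter provides, the short Python sketch below uses hypothetical transmittance values and spectral fractions (none of these numbers come from the application) to compare how much of the pulsed illumination versus the background ambient light reaches the imager through a pass-band filter.

```python
# Illustrative sketch (assumed, simplified numbers): relative attenuation of
# pulsed LED illumination vs. background ambient light by a near-IR pass filter.

def received_fraction(fraction_in_passband, t_pass, t_stop):
    """Fraction of a source's light reaching the imager through a filter with
    in-band transmittance t_pass and out-of-band transmittance t_stop."""
    in_band = fraction_in_passband * t_pass
    out_of_band = (1.0 - fraction_in_passband) * t_stop
    return in_band + out_of_band

# Hypothetical values: 850 nm LED light is almost entirely inside the pass band;
# fluorescent ambient light is almost entirely outside it.
led = received_fraction(fraction_in_passband=0.95, t_pass=0.9, t_stop=0.01)
ambient = received_fraction(fraction_in_passband=0.02, t_pass=0.9, t_stop=0.01)
print(f"LED transmitted fraction:     {led:.3f}")      # ~0.86
print(f"Ambient transmitted fraction: {ambient:.3f}")  # ~0.03
```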
In use, the imaging system 100 can form freeze-frame images of the object 160 as it moves across the field of view 150. In this mode of operation, the light sources 130 are turned on for a fraction of the imager 110 frame time. The rows of the imager 110 are set to expose for an entire frame time, so that all rows are exposing during the time of the illumination pulse. For bar code reading, the exposure time per frame (and thus the pulse width of the illumination) should satisfy the following relation: TEXP=U/V, where U is the (minimum) unit width of a bar or space and V is the maximum velocity at which the bar code can move across the field of view 150.
The light sources 130 can be pulsed or strobed periodically with a pulse rate and duty cycle set to match a desired exposure time. The frame rate of the imager 110 and the strobing frequency or pulse rate can be set, within the limits of the imager 110, to satisfy the following relation: FRMIN = V/(WF − WO), where FRMIN is the minimum frame rate, V is the maximum velocity at which the object 160 moves across the field of view 150, WF is the width of the field of view 150 in the direction of the velocity, and WO is the width of the object 160 in the direction of the velocity. Satisfying that relation ensures that the entire object 160 is seen by the imager 110 as it moves through the field of view 150. If the light from the light sources 130 is not visible, then the frame rate can be quite low without generating annoying visible flicker; visible light pulses at a frequency of about 50 Hertz (Hz) or less can cause a flicker effect that is distracting to the human eye. The use of near-IR illumination is advantageous for another reason as well: near-IR LEDs are capable of handling significant pulse overdrive currents at low duty cycles, enabling bright illumination for the imager 110. The relatively low frame rate needed to ensure capture of the object 160 allows the illumination LEDs to be pulsed at a very low duty cycle. For example, if the width of the field of view WF is 5 inches, the width of the object WO is 1 inch, and the maximum object velocity is 50 inches per second, then the minimum frame rate FRMIN is 12.5 frames per second. If the object is a bar code with a minimum element width of 10 mils (0.010 inch), then the maximum exposure time (and therefore the LED pulse width) is 200 μs (microseconds). The duty cycle of the LED would then be 200 μs × 12.5 Hz, or 0.25%, which is quite small. An LED rated at 50 mA (milliamps) of continuous-duty current may be capable of handling 1 amp of current when pulsed at this low duty cycle, increasing the effective illumination on the object 160 by a factor of 20.
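The following minimal Python sketch restates the two relations above and reproduces the worked numbers from this paragraph (5 inch field of view, 1 inch object, 50 inches per second, 10 mil minimum element width); the function names are chosen here for illustration only.

```python
# Sketch of the timing relations stated above (units: inches, seconds).
# The numeric example matches the one in the text.

def max_exposure_s(unit_width_in, max_velocity_ips):
    """T_EXP = U / V: longest exposure (and LED pulse width) that keeps
    motion blur within one minimum bar/space width."""
    return unit_width_in / max_velocity_ips

def min_frame_rate_hz(velocity_ips, field_width_in, object_width_in):
    """FR_MIN = V / (W_F - W_O): slowest frame rate that still guarantees the
    whole object appears in at least one frame as it crosses the field of view."""
    return velocity_ips / (field_width_in - object_width_in)

t_exp = max_exposure_s(0.010, 50.0)         # 200 microseconds
fr_min = min_frame_rate_hz(50.0, 5.0, 1.0)  # 12.5 frames per second
duty_cycle = t_exp * fr_min                 # 0.0025 -> 0.25 %
overdrive = 1.0 / 0.050                     # 1 A pulsed vs. 50 mA continuous -> 20x
print(f"T_EXP = {t_exp*1e6:.0f} us, FR_MIN = {fr_min} fps, "
      f"duty cycle = {duty_cycle:.2%}, overdrive factor = {overdrive:.0f}x")
```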
The optical filter 170 transmits with relatively high transmittance the illumination generated by the light sources 130 and reflected off the object 160, while transmitting light of other frequencies with relatively low transmittance. When the light sources 130 operate in the near-IR frequency range and the optical filter 170 has a near-IR pass band, the background ambient lighting is preferably provided by fluorescent lamps, which generate little near-IR energy. In that case, the imaging system 100 effectively discriminates illumination generated by the light sources 130 from background ambient light.
The imaging system 100 is useful in a wide variety of imaging applications. One example of an imaging application suitable for use of the imaging system 100 is reading optical codes, such as a bar code 260. One particular example of a bar code reader utilizing the principles of the imaging system 100 is the bar code imaging system 200 depicted in the accompanying drawings.
The lens assembly 240 preferably has a generalized axicon focus function, as it introduces a rather large amount of spherical aberration. The signal processor 290 is designed to cancel or compensate, partially or fully, for the aberration or blurriness caused by the lens assembly 240. The signal processor 290 preferably comprises a virtual scan line extraction module 292, a nonuniform pixel gain 294, and an equalizer 296. The virtual scan line extraction module 292, which is optional, reads and/or assembles samples or pixels from the imager 110 lying along one or more lines (i.e., “virtual scan lines”) across the image at arbitrary angles or in other desired scan patterns. The nonuniform pixel gain 294, although also optional, can be advantageous in that it can suppress pixel nonuniformity that arises from such causes as differences in gain from pixel to pixel in the imager 110. The nonuniform pixel gain 294 is preferably an array of scale factors that are multiplied by the imager's intensity values on a pixel-by-pixel basis. The equalizer 296 is a filter, such as a digital finite impulse response (FIR) filter, whose transfer function preferably approximates the inverse of the modulation transfer function (MTF) of the lens assembly 240, so as to cancel or compensate for the blurriness or aberration caused by the lens assembly 240. Further details about the signal processor 290 are included in the above-referenced U.S. patent application Ser. No. 11/045,213.
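As a rough illustration of this processing chain (a sketch of an assumed structure, not the application's actual implementation; the placeholder gain map and equalizer taps are invented for the example), the Python snippet below extracts a virtual scan line, applies a per-pixel gain correction, and equalizes the result with a short FIR filter.

```python
# Illustrative sketch of the described chain: virtual scan line extraction,
# per-pixel gain correction, and FIR equalization approximating an inverse MTF.

import numpy as np

def extract_virtual_scan_line(image, start, end, num_samples):
    """Sample pixels along a straight line from start to end, given as (row, col)."""
    rows = np.linspace(start[0], end[0], num_samples).round().astype(int)
    cols = np.linspace(start[1], end[1], num_samples).round().astype(int)
    return image[rows, cols].astype(float)

def apply_pixel_gain(samples, gains):
    """Multiply each sample by its scale factor to flatten pixel-to-pixel gain differences."""
    return samples * gains

def equalize(samples, fir_taps):
    """FIR equalizer; in practice the taps would approximate the inverse MTF of the lens."""
    return np.convolve(samples, fir_taps, mode="same")

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(480, 640))  # stand-in for imager output
    line = extract_virtual_scan_line(image, (240, 0), (240, 639), 640)
    gains = np.ones_like(line)                           # placeholder gain map
    taps = np.array([-0.1, 1.2, -0.1])                   # placeholder equalizer taps
    signal = equalize(apply_pixel_gain(line, gains), taps)
    print(signal.shape)
```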
The methods and systems illustrated and described herein can exist in a variety of forms, both active and inactive. For example, the signal processor 290 and the methods 300 and 400 can exist as one or more software programs comprising program instructions in source code, object code, executable code, or other formats. Any of the above formats can be embodied on a computer-readable medium, which includes storage devices and signals, in compressed or uncompressed form. Exemplary computer-readable storage devices include conventional computer system RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), flash memory, and magnetic or optical disks or tapes. Exemplary computer-readable signals, whether modulated using a carrier or not, are signals that a computer system hosting or running a computer program can be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of software on a CD-ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer-readable medium. The same is true of computer networks in general.
The terms and descriptions used above are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations can be made to the details of the above-described embodiments without departing from the underlying principles of the invention. The scope of the invention should therefore be determined only by the following claims—and their equivalents—in which all terms are to be understood in their broadest reasonable sense unless otherwise indicated.
Number | Name | Date | Kind |
---|---|---|---|
3614310 | Korpel | Oct 1971 | A |
4082431 | Ward | Apr 1978 | A |
4275454 | Klooster | Jun 1981 | A |
4308521 | Casasent et al. | Dec 1981 | A |
4804249 | Reynolds et al. | Feb 1989 | A |
4864249 | Reiffin | Sep 1989 | A |
5003166 | Girod | Mar 1991 | A |
5010412 | Garriss | Apr 1991 | A |
5080456 | Katz et al. | Jan 1992 | A |
5142413 | Kelly | Aug 1992 | A |
5164584 | Wike et al. | Nov 1992 | A |
5278397 | Barkan et al. | Jan 1994 | A |
5307175 | Seachman | Apr 1994 | A |
5315095 | Marom et al. | May 1994 | A |
5331143 | Marom et al. | Jul 1994 | A |
5332892 | Li et al. | Jul 1994 | A |
5347121 | Rudeen | Sep 1994 | A |
5352922 | Barkan et al. | Oct 1994 | A |
5354977 | Roustaei | Oct 1994 | A |
5371361 | Arends et al. | Dec 1994 | A |
5386105 | Quinn et al. | Jan 1995 | A |
5418356 | Takano | May 1995 | A |
5422472 | Tavislan et al. | Jun 1995 | A |
5426521 | Chen et al. | Jun 1995 | A |
5438187 | Reddersen et al. | Aug 1995 | A |
5446271 | Cherry et al. | Aug 1995 | A |
5475208 | Marom | Dec 1995 | A |
5486688 | Iima et al. | Jan 1996 | A |
5506392 | Barkan et al. | Apr 1996 | A |
5583342 | Ichie | Dec 1996 | A |
5623137 | Powers et al. | Apr 1997 | A |
5625495 | Moskovich | Apr 1997 | A |
5635699 | Cherry et al. | Jun 1997 | A |
5646390 | Wang et al. | Jul 1997 | A |
5646391 | Forbes et al. | Jul 1997 | A |
5689104 | Suzuki et al. | Nov 1997 | A |
5714750 | Eastman et al. | Feb 1998 | A |
5717194 | Forbes et al. | Feb 1998 | A |
5745176 | Lebens | Apr 1998 | A |
5748371 | Cathey et al. | May 1998 | A |
5756981 | Roustaei et al. | May 1998 | A |
5770847 | Olmstead | Jun 1998 | A |
5814803 | Olmstead et al. | Sep 1998 | A |
5825044 | Allen et al. | Oct 1998 | A |
5945670 | Rudeen | Aug 1999 | A |
6011660 | Nagahara | Jan 2000 | A |
6042012 | Olmstead et al. | Mar 2000 | A |
6056198 | Rudeen et al. | May 2000 | A |
6057971 | Mihara | May 2000 | A |
6066857 | Fantone et al. | May 2000 | A |
6069738 | Cathey et al. | May 2000 | A |
6073851 | Olmstead et al. | Jun 2000 | A |
6097856 | Hammond | Aug 2000 | A |
6098887 | Figarella et al. | Aug 2000 | A |
6142376 | Cherry et al. | Nov 2000 | A |
6147616 | Ori | Nov 2000 | A |
6152371 | Schwartz et al. | Nov 2000 | A |
6164540 | Bridgelall et al. | Dec 2000 | A |
6184534 | Stephany et al. | Feb 2001 | B1 |
6209788 | Bridgelall et al. | Apr 2001 | B1 |
6236737 | Gregson et al. | May 2001 | B1 |
6256067 | Yamada | Jul 2001 | B1 |
6276606 | Liou et al. | Aug 2001 | B1 |
6290135 | Acosta et al. | Sep 2001 | B1 |
6347163 | Roustaei | Feb 2002 | B2 |
6347742 | Winarski et al. | Feb 2002 | B2 |
6493061 | Arita et al. | Dec 2002 | B1 |
6523750 | Dickson et al. | Feb 2003 | B1 |
6536898 | Cathey et al. | Mar 2003 | B1 |
6540145 | Gurevich et al. | Apr 2003 | B2 |
6545714 | Takada | Apr 2003 | B1 |
6568594 | Hendriks et al. | May 2003 | B1 |
6616046 | Barkan et al. | Sep 2003 | B1 |
6633433 | Bergstein et al. | Oct 2003 | B2 |
6651886 | Gurevich et al. | Nov 2003 | B2 |
6661458 | Takada et al. | Dec 2003 | B1 |
6674473 | Takada | Jan 2004 | B1 |
6689998 | Bremer | Feb 2004 | B1 |
6732930 | Massieu et al. | May 2004 | B2 |
7086595 | Zhu et al. | Aug 2006 | B2 |
7204418 | Joseph et al. | Apr 2007 | B2 |
7213762 | Zhu et al. | May 2007 | B2 |
20020070342 | Berenz et al. | Jun 2002 | A1 |
20020134835 | Kennedy | Sep 2002 | A1 |
20020148900 | Gurevich et al. | Oct 2002 | A1 |
20020149693 | Tantalo et al. | Oct 2002 | A1 |
20020154415 | Miyauchi et al. | Oct 2002 | A1 |
20030043463 | Li et al. | Mar 2003 | A1 |
20030107658 | Huang et al. | Jun 2003 | A1 |
20040136069 | Li et al. | Jul 2004 | A1 |
20050122422 | Kent et al. | Jun 2005 | A1 |
20050134725 | Uenaka et al. | Jun 2005 | A1 |
20060113386 | Olmstead | Jun 2006 | A1 |
Number | Date | Country |
---|---|---|
56050469 | May 1981 | JP |
Number | Date | Country |
---|---|---|
20060164541 A1 | Jul 2006 | US |