The invention relates to imaging and processing of target images and more particularly to the processing of image data from multiple sources.
Various electro-optical systems have been developed for reading optical indicia, such as bar codes. A bar code is a coded pattern of graphical indicia comprised of a series of bars and spaces of varying widths, the bars and spaces having differing light reflecting characteristics. Systems that read and decode bar codes employing CCD or CMOS-based imaging systems are typically referred to as imaging-based bar code readers or bar code scanners.
An imaging-based bar code reader includes an imaging and decoding system comprising an imaging system for generating an image of a target bar code and decoding circuitry for decoding the imaged target bar code. Imaging systems include CCD arrays, CMOS arrays, or other imaging pixel arrays having a plurality of photosensitive elements or pixels. Light reflected from a target image, e.g., a target bar code, is focused through a lens of the imaging system onto the pixel array. Output signals from the pixels of the pixel array are digitized by an analog-to-digital converter. Decoding circuitry of the imaging and decoding system processes the digitized signals and attempts to decode the imaged bar code.
Many modern imaging systems include a computing device that interfaces with video data. This interface is usually accomplished using a direct-memory-access (DMA) method or through a special hardware interface, such as a video port, that is part of the processor of the computing device. The object of these two approaches is to allow the computing device to process a relatively large amount of video data without overloading the computing bandwidth of the processor or the data access bandwidth of the memory bus.
Some imaging systems are designed to process data from more than one camera. Other systems process data from one or more cameras as well as packets of data about, for example, the video image being processed, that have been computed by a co-processor. Some examples of co-processors include FPGAs, ASICs, and DSPs. The use of co-processors allows the imaging system to be designed with relatively simple and inexpensive microprocessors.
While cost effective, inexpensive microprocessors tend to have limited bandwidth interfaces to external devices. These microprocessors usually include a video port that is a high bandwidth interface that is configured to accept data in a digital video format. In the digital video format the data is organized in a contiguous block, thus allowing the processor to accept all data with minimum effort in handshaking, even if the data is transmitted in multiple, discontinuous transmission sessions. Allowing discontinuous video transmission frees up the processor bandwidth to access and process data between these sessions, while at the same time reducing the amount of data that the sending component must buffer.
In imaging systems with multiple data sources or additional data to be processed, the overhead involved in switching between the different data destinations may be so high that the use of an inexpensive microprocessor is not feasible.
Interleaving multiple logical data streams into one physical data stream reduces the overhead required to process the multiple logical data streams. Data from at least two sources is transmitted to a processor via a single video or other high-speed port, such as an IEEE 1394 or “Firewire” port, DMA channel, USB port, or other peripheral interface that is configured to receive successive frames of data according to concurrently received timing signals. A first array of data, such as a frame of pixels, is received from a first data source, such as a camera having a first frame rate, and a second array of data is received from a second data source, such as a camera or logical component that provides statistics about the first array of data. The first and second arrays of data are interleaved to form a combined array of data. One or more timing signals are synthesized such that the combined array of data can be transmitted at a synthesized data transmission rate that is usually higher than the first rate, to allow the system to keep up with the data being produced by the first data source. The combined array of data is transmitted to the processor according to the synthesized timing signals to enable transmission of the combined array of data as a simulated single frame of data.
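As an illustrative sketch of the idea above (the function names and the row-appended layout are assumptions for illustration, not a format prescribed here; a real implementation would live in FPGA logic rather than software), combining a video frame with auxiliary rows and scaling the pixel clock might look like:

```python
def simulate_single_frame(video_rows, stats_rows):
    # One possible interleaving (an assumption for illustration): the
    # statistics rows are appended below the video rows, so the receiving
    # video port sees a single, taller frame.
    return list(video_rows) + list(stats_rows)

def synthesized_pixel_rate(base_rate_hz, video_line_count, extra_line_count):
    # The combined frame carries more lines per frame period than the
    # camera alone produces, so the synthesized pixel clock must run
    # proportionally faster for the processor to keep up with the source.
    total = video_line_count + extra_line_count
    return base_rate_hz * total / video_line_count
```

For example, a 480-line frame extended by 48 statistics lines at a 27 MHz base pixel clock would need a synthesized clock of about 29.7 MHz (assumed figures, for illustration only).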
The processor can be programmed to locate the first and second arrays of data from within the combined array of data after the transmission is made. In the case where the port is a video port, the timing signals can include a synthesized pixel rate and horizontal and vertical synchronization pulses.
In one aspect, the present invention concerns a multicamera imaging-based bar code reader for imaging a target bar code on a target object, the bar code reader comprising: a housing supporting a plurality of transparent windows and defining an interior region, a target object being presented to the plurality of windows for imaging a target bar code; an imaging system including a plurality of camera assemblies coupled to an image processing system, each camera assembly of the plurality of camera assemblies being positioned within the housing interior region and defining a field of view, each camera assembly including a sensor array and an imaging lens assembly for focusing the field of view of the camera assembly onto the sensor array, for each camera assembly of the plurality of camera assemblies, the sensor array being read out to generate image frames of the field of view of the camera assembly; an image processing system including a preprocessor and a preprocessor memory coupled to the preprocessor, a main system processor and a main system memory coupled to the main system processor, the preprocessor simultaneously receiving captured image data corresponding to image frames of each of the plurality of camera assemblies and storing the captured image data in the preprocessor memory; the preprocessor generating statistical information regarding the captured image data stored in the preprocessor memory, the statistical information being analyzed by a selected one of the preprocessor and the main system processor to identify regions of the image frames wherein an image of the target bar code or a part thereof may be present; the preprocessor transferring portions of captured image data corresponding to identified regions of the image frames to the main system memory; and the main system processor operating on the portions of the captured image data transferred to the main system memory and attempting to decode an image of the target bar code.
In another aspect, the present invention concerns a method of operating a multicamera imaging-based bar code reader for imaging a target bar code on a target object, the steps of the method comprising: providing a reader including a housing supporting a plurality of transparent windows and defining an interior region, a target object being presented to the plurality of windows for imaging a target bar code; an imaging system including a plurality of camera assemblies coupled to an image processing system, each camera assembly of the plurality of camera assemblies being positioned within the housing interior region and defining a field of view, each camera assembly including a sensor array and an imaging lens assembly for focusing the field of view of the camera assembly onto the sensor array; for each camera assembly of the plurality of camera assemblies, the sensor array being read out to generate image frames of the field of view of the camera assembly at periodic intervals; providing an image processing system including a preprocessor and a preprocessor memory coupled to the preprocessor, and a main system processor and a main system memory coupled to the main system processor; operating the reader to image a target bar code on a target object, for each camera assembly of the plurality of camera assemblies, the sensor array being read out at a predetermined frame rate to generate image frames of the field of view of the camera assembly at periodic intervals, the preprocessor simultaneously receiving captured image data corresponding to image frames of each of the plurality of camera assemblies and storing the captured image data in the preprocessor memory; the preprocessor generating statistical information regarding the captured image data stored in the preprocessor memory, the statistical information being analyzed by a selected one of the preprocessor and the main system processor to identify regions of the image frames wherein an image of the
target bar code or a part thereof may be present; the preprocessor transferring portions of captured image data corresponding to identified regions of the image frames to the main system memory; and the main system processor operating on the portions of the captured image data transferred to the main system memory and attempting to decode an image of the target bar code.
In another aspect, the present invention concerns: an imaging system for use in a multicamera imaging-based bar code reader having a housing supporting a plurality of transparent windows and defining an interior region, a target object being presented to the plurality of windows for imaging a target bar code on a target object, the imaging system comprising: a plurality of camera assemblies coupled to an image processing system, each camera assembly of the plurality of camera assemblies being positioned within the housing interior region and defining a field of view, each camera assembly including a sensor array and an imaging lens assembly for focusing the field of view of the camera assembly onto the sensor array, for each camera assembly of the plurality of camera assemblies, the sensor array being read out to generate image frames of the field of view of the camera assembly; an image processing system including a preprocessor and a preprocessor memory coupled to the preprocessor, a main system processor and a main system memory coupled to the main system processor, the preprocessor simultaneously receiving captured image data corresponding to image frames of each of the plurality of camera assemblies and storing the captured image data in the preprocessor memory; the preprocessor generating statistical information regarding the captured image data stored in the preprocessor memory, the statistical information being analyzed by a selected one of the preprocessor and the main system processor to identify regions of the image frames wherein an image of the target bar code or a part thereof may be present; the preprocessor transferring portions of captured image data corresponding to identified regions of the image frames to the main system memory; and the main system processor operating on the portions of the captured image data transferred to the main system memory and attempting to decode an image of the target bar code.
These and other objects, advantages, and features of the exemplary embodiment of the invention are described in detail in conjunction with the accompanying drawings.
Referring to
In order to decode barcodes, an imaging scanner often uses a set of statistics from the image. The CPU uses the statistical information to identify portions of the image that likely contain the barcode or target. One example of a scanning system that employs statistics in the processing of images containing barcodes can be found in U.S. Pat. No. 6,340,114 to Correa et al. and assigned to the assignee of the present invention. The '114 patent is incorporated herein by reference in its entirety. In some imaging scanners the statistics are computed in software on an as-needed basis to identify the barcode in the image. This process of computing the statistics is relatively slow and generally not acceptable for high-speed scanners, especially those that include more than one camera. For these high-speed scanners, the image statistics can be computed in hardware by a custom component, such as an FPGA, on the video data as it is acquired from the camera. In this manner the statistics for all regions of the image can be computed in the same time it takes to acquire the image from the camera. However, the statistics must still be communicated to the CPU along with the video data.
Many CPUs have external DMA channels to efficiently transfer data to memory from an external device. However, when data is to be routed to different destinations, such as one or more streams of simultaneously generated video and statistics data, one DMA channel per data stream is usually required. Ideally, each DMA channel is set up once and receives a large block of data that is sent in multiple small pieces to be stored sequentially in the CPU's memory. As the number of external data sources, such as cameras, increases, the number of DMA channels may become a limiting factor in the capability of the CPU to receive data. For example, in many cases communications between the processor and an external device, such as a host computer or the internet, may require the use of a DMA channel. Therefore, it would be advantageous to use a single DMA channel for all camera data, even when multiple data streams are present. However, it would also be preferable that the DMA channel need not be set up multiple times (such as hundreds of times) within one image frame time, because each set up process is relatively expensive in terms of time and processing resources.
One possible approach to importing data into the CPU from multiple data sources is to use external memory to buffer the streams of data so that they are delivered to the CPU when large contiguous segments of data become ready to be sent. For example, if data corresponding to one complete frame of the image is buffered for each of two video streams, it is feasible to send the data to the CPU as two consecutive frames, each in a contiguous segment or together in a single segment. This technique reduces the overhead of DMA set up; however, external memory for such a large amount of buffering can be prohibitively expensive.
Another solution to importing data from multiple sources on a single DMA channel is to interleave the multiple logical data streams into one physical data stream with very limited buffering for each of the logical data streams. The combined physical data can be transported into one contiguous block of processor memory. By organizing the data into one large stream (and memory block) the amount of overhead to switch between data streams is reduced significantly.
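The interleaving with limited per-stream buffering described above might be sketched as follows (illustrative only; the chunk size, the fixed round-robin schedule, and the equal-length streams are assumptions, and a real FPGA implementation would operate on hardware FIFOs rather than Python lists):

```python
def chunks(seq, n):
    # Split a sequence into consecutive pieces of n items each.
    return [seq[i:i + n] for i in range(0, len(seq), n)]

def interleave(streams, n):
    # Merge logical streams into one physical stream, round-robin by
    # fixed-size chunks: each source need only buffer n items at a time,
    # and the fixed schedule lets the receiver de-interleave the block
    # without per-chunk headers. Assumes equal-length streams for brevity.
    out = []
    for group in zip(*(chunks(s, n) for s in streams)):
        for piece in group:
            out.extend(piece)
    return out

def deinterleave(physical, k, n):
    # Recover k logical streams from the known round-robin chunk layout.
    outs = [[] for _ in range(k)]
    for i, piece in enumerate(chunks(physical, n)):
        outs[i % k].extend(piece)
    return outs
```

Because the physical stream lands in one contiguous block of processor memory, the CPU can perform the `deinterleave` step cheaply after the single DMA transfer completes.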
Certain inexpensive processors, such as the Freescale® MC9328MXL, lack sufficient support for efficient external DMA channels. One feasible way to input the large amount of data in the image and its statistics to the CPU is through the video or camera interface port. The video port is designed to accept only one video stream at a time, although there is often flexibility to define the video format, including the video frame size. When there is more than one data stream, such as one video stream plus one stream of statistics, the several data streams can be interleaved to form one physical stream of data. In the same way, video streams from multiple cameras can be interleaved into one stream, in the format of a video stream with a larger video frame, for communication through the video port. This technique works on any processor with a camera interface (digital video) port, and is especially advantageous when the digital video port is the only high bandwidth external interface. For example, this approach may be particularly useful in camera-enabled mobile phones where multiple cameras are deployed, or to improve the barcode reading performance of camera-enabled phones. The described embodiment communicates the resulting physical data stream through a video or camera port. However, the resulting physical data stream can be communicated through any high speed DMA channel, USB port, IEEE 1394 port, or other peripheral interface of a CPU.
The organization of the interleaved data is determined by the FPGA and can be arbitrarily selected to fit a particular need. The formats shown in
One possible method 200 that can be employed by the FPGA to construct simulated video data for input into a video port is shown in flowchart form in
It can be seen from the foregoing description that interleaving multiple logical data streams and formatting them as a simulated single frame of data can allow the transfer of large quantities of data from multiple sources into a CPU using a single video port.
Bi-Optic Imaging Scanner
One exemplary embodiment of a multicamera imaging-based bar code reader, often referred to as an imaging bi-optic scanner, is shown generally at 300 in
Each of the six imaging camera assemblies C1-C6 is mounted on a planar mounting board, such as a printed circuit PC board 312, disposed within an interior region 312 of the housing 306. The camera assemblies C1-C6, when actuated by the imaging system 304, produce raw gray scale images of their respective fields of view FV1-FV6. Mounting the camera assemblies C1-C6 onto the PC board 312 advantageously permits easy assembly of the camera assemblies and precise alignment of the camera assemblies within the housing 306. As can best be seen in
As can be seen in
In one exemplary embodiment, the sensor array 316 is a Micron WVGA, 60 fps, global shutter sensor. In one exemplary embodiment, the imaging lens assemblies 318 of the middle two camera assemblies C2, C5 are non-anamorphic with approximately a 12 inch focus, while those of the remaining four corner camera assemblies C1, C3, C4, C6 are anamorphic with approximately a 20 inch focus. Anamorphic imaging lenses advantageously have an aspect ratio of the field of view FV such that the field of view properly “fits” or corresponds to the size of the horizontal or vertical window H that the field of view is projected through, and they provide increased resolution in one axis for tilted bar codes. In one exemplary embodiment, each of the imaging camera assemblies C1-C6 provides sequential image capture every 16 milliseconds.
The camera assemblies C1-C6 further include an illumination assembly 320 comprising a pair of surface mount LEDs 322 flanking the sensor array 316 and a corresponding pair of illumination lenses 324 positioned above each of the LEDs 322. The optics 324 of the illumination assemblies 320 provide for narrow fields of view of the illumination assemblies to substantially match the narrow fields of view FV1-FV6 of the imaging lens assemblies 318.
As can be seen from the imaging ray traces for the camera assemblies C1-C6 shown in
The imaging system 304 includes an image processing system 330 that operates on image data 350 (shown schematically
Each camera assembly C1-C6 of the imaging system 304 captures a series of image frames of its respective field of view FV1-FV6. The series of image frames for each camera assembly C1-C6 is shown schematically as IF1, IF2, IF3, IF4, IF5, IF6 in
For each camera assembly C1-C6, electrical signals are generated by reading out some or all of the pixels of the pixel array after an exposure period, generating a gray scale value digital signal. This occurs as follows: within each camera, the light receiving photosensors/pixels of the sensor array are charged during an exposure period. Upon reading out of the pixels of the sensor array, an analog voltage signal is generated whose magnitude corresponds to the charge of each pixel read out.
The image signals of each camera assembly C1-C6 represent a sequence of photosensor voltage values, the magnitude of each value representing an intensity of the reflected light received by a photosensor/pixel during an exposure period. Processing circuitry of each of the camera assemblies C1-C6, including gain and digitizing circuitry (not shown), then digitizes and converts the analog signal into a digital signal whose magnitude corresponds to raw gray scale values of the pixels. The series of gray scale values GSV (
The image processing system 330 controls operation of the cameras C1-C6. The cameras C1-C6, when operated during a bar code imaging/reading session, generate digital signals. The signals are raw, digitized gray scale values which correspond to a series of generated image frames for each camera C1-C6. The image data 350 includes, for example, for the camera C1, signals 350a corresponding to digitized gray scale values corresponding to a series of image frames IF1, for the camera C2, signals 350b corresponding to digitized gray scale values corresponding to a series of image frames IF2, for the camera C3, signals 350c corresponding to digitized gray scale values corresponding to a series of image frames IF3, for the camera C4, signals 350d corresponding to digitized gray scale values corresponding to a series of image frames IF4, for the camera C5, signals 350e corresponding to digitized gray scale values corresponding to a series of image frames IF5, and for the camera C6, signals 350f corresponding to digitized gray scale values corresponding to a series of image frames IF6.
The image processing system 330 comprises the preprocessor 332, the preprocessor memory 334, a main system processor 336 and a main system memory 338. Advantageously, the scanner 300 employs the preprocessor 332 and the preprocessor memory 334 such that the image data 350 from all six camera assemblies is stored in the preprocessor memory 334 and analyzed by the preprocessor 332 to generate statistical information SI regarding the image data 350. Based on the statistical information SI generated by the preprocessor 332, only a selected portion 350′ of the totality of the image data 350 is transferred to the main system memory 338 and used by the main system processor 336 in an attempt to decode an image 308′ of the target bar code 308 found in the selected image data 350′. The selected image data 350′ is shown schematically in
The statistical information SI generated by the preprocessor 332 includes information regarding the probability and/or location of an image 308′ of the target bar code 308, or some portion of the imaged target bar code 308′, within a captured image frame. In other words, the statistical information SI facilitates determining, for a particular image frame in the sequence of image frames IF1-IF6 being continuously generated by the respective cameras C1-C6: 1) whether an imaged bar code 308′ or a portion of an imaged bar code 308′ is present within the image frame; and 2) if so, where the imaged bar code 308′ or portion thereof is located within the image frame gray scale values.
By way of example and without limitation of the statistical information SI to any particular characteristics or methods, the statistical analysis performed by the preprocessor 332 for an image frame in the series of image frames IF1-IF6 may include dividing the image frame into nonoverlapping 8×8 pixel squares. For each pixel square, the preprocessor 332 may calculate statistics including: 1) the average pixel gray scale brightness value (i.e., 0 gray scale value being extremely dark and 255 gray scale value being extremely bright or light); 2) the white level (corresponding to the third brightest pixel in the square); 3) the dark level (corresponding to the third darkest pixel in the square); 4) the contrast (an average gray scale value of the white level and the dark level); 5) the primary vector; and 6) the secondary vector, among others.
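The per-block measures listed above can be sketched directly (a minimal illustration of the definitions given, with hypothetical names; the actual preprocessor 332 computes these in hardware):

```python
def block_statistics(block):
    # Statistics for one 8x8 block of gray scale values (0 = extremely
    # dark, 255 = extremely bright), per the definitions above.
    flat = sorted(v for row in block for v in row)
    average = sum(flat) / len(flat)
    white_level = flat[-3]                    # third brightest pixel
    dark_level = flat[2]                      # third darkest pixel
    contrast = (white_level + dark_level) / 2 # average of white and dark levels
    return {"average": average, "white": white_level,
            "dark": dark_level, "contrast": contrast}
```

Sorting the 64 values makes the third-brightest and third-darkest picks trivial; using the third extreme rather than the absolute extreme gives some robustness against isolated noisy pixels.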
At a conceptual, schematic level, the primary vector can be thought of as follows: the primary vector represents the direction and magnitude of the maximum contrast change within the 8×8 pixel block. For example, if an 8×8 pixel block had 32 pixels on the right half of the block that were very light (high gray scale values) and 32 pixels on the left half of the block that were very dark (low gray scale values), the primary vector would be a vertically oriented vector line dividing the 32 bright and the 32 dark pixels. The primary vector includes both a direction and a magnitude, wherein the magnitude is a function of how large the difference is between the gray scale values of the pixels on opposite sides of the primary vector. In one exemplary embodiment, to limit the scope of the calculations to be performed by the preprocessor 332, the direction of the primary vector may be limited to one of 20 directions (the directions equally dividing the range from −90° to +90°). In an exemplary embodiment, for each pixel in the 8×8 pixel block or array, a difference between the gray scale value of the pixel and each of its closest neighbors to the top, bottom, left and right is calculated. From this, an approximate direction and magnitude of the contrast change between the pixel and its neighbors is calculated. The magnitude is added to one of 20 buckets or running sums corresponding to the 20 directions. After all the pixels are processed in this way, the direction corresponding to the bucket with the largest running sum is selected as the direction of the primary vector, and the running sum from this bucket is the primary vector magnitude.
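The bucket-voting procedure just described might be sketched as follows. This is an illustrative reconstruction, not the exact hardware algorithm: it bins the gradient (contrast-change) direction, whereas the dividing line in the example above runs perpendicular to that gradient.

```python
import math

def primary_vector(block, n_dirs=20):
    # For each pixel, differences against the left/right and top/bottom
    # neighbours give an approximate gradient; its magnitude is added to
    # one of n_dirs buckets spanning -90 to +90 degrees.
    h, w = len(block), len(block[0])
    buckets = [0.0] * n_dirs
    for y in range(h):
        for x in range(w):
            gx = gy = 0
            if x > 0:     gx += block[y][x] - block[y][x - 1]
            if x < w - 1: gx += block[y][x + 1] - block[y][x]
            if y > 0:     gy += block[y][x] - block[y - 1][x]
            if y < h - 1: gy += block[y + 1][x] - block[y][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.degrees(math.atan2(gy, gx))
            # Fold orientation into (-90, 90]: opposite gradients vote alike.
            if angle > 90:
                angle -= 180
            elif angle <= -90:
                angle += 180
            idx = min(int((angle + 90) / 180 * n_dirs), n_dirs - 1)
            buckets[idx] += mag
    best = max(range(n_dirs), key=buckets.__getitem__)
    # Report the centre of the winning bucket and its running sum.
    return -90 + (best + 0.5) * (180 / n_dirs), buckets[best]
```

For the dark-left/bright-right example above, the gradient votes concentrate in the bucket nearest 0° (a horizontal contrast change across a vertical dividing line), with a large magnitude.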
After the primary vector is found, a secondary vector may be found wherein the secondary vector is a line that produces the second largest direction of contrast change after the primary vector. The secondary vector is useful in locating regions of an image frame where a 2D imaged bar code 308′ may be present. As noted earlier, certain imaging statistics are discussed in detail in the aforesaid U.S. Pat. No. 6,340,114 to Correa et al., which is incorporated by reference in its entirety herein.
As will be explained below, after the statistical information SI is generated by the preprocessor 332, the information SI is subsequently analyzed by either the preprocessor 332 or the main system processor 336. The analysis of the statistical information SI is performed to identify a region or regions of one or more image frames IF1-IF6 where a bar code image 308′ is likely to be found or may be found in an image frame. For example, the analysis of statistical information SI would include looking for a region of an image frame wherein a group of adjacent pixel squares have similar primary vector directions and magnitudes. Such a situation would be indicative of lines in adjacent pixel squares each extending in the same direction. Such a grouping of lines extending in the same direction would likely be an image 308′ of the target bar code 308, for example, an imaged 1D bar code.
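A minimal sketch of this region-identification idea (hypothetical names and thresholds; the actual analysis is not limited to this test) is to flag each block whose right or lower neighbour shares a similar primary vector direction with sufficient magnitude:

```python
def likely_barcode_blocks(directions, magnitudes, dir_tol=1, mag_min=500):
    # directions/magnitudes: per-block primary vector grids (direction as
    # a bucket index, magnitude as the bucket's running sum). Adjacent
    # blocks with matching strong directions suggest parallel bars.
    rows, cols = len(directions), len(directions[0])
    flagged = set()
    for r in range(rows):
        for c in range(cols):
            if magnitudes[r][c] < mag_min:
                continue
            for dr, dc in ((0, 1), (1, 0)):   # right and below neighbours
                nr, nc = r + dr, c + dc
                if (nr < rows and nc < cols
                        and magnitudes[nr][nc] >= mag_min
                        and abs(directions[r][c] - directions[nr][nc]) <= dir_tol):
                    flagged.add((r, c))
                    flagged.add((nr, nc))
    return flagged
```

Only the captured image data underlying the flagged blocks would then need to be transferred for decoding.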
Use of the preprocessor 332 and preprocessor memory 334 advantageously minimizes the direct memory access (DMA) bandwidth required of the main system memory 338 and the main system processor 336 by reducing the amount of image data 350 that must be transferred to the main system memory 338. Further, available main system processors 336 do not include six image sensor ports. The preprocessor 332 includes six image sensor ports 332a-332f and comprises digital logic and memory, and may optionally contain a CPU; thus, the preprocessor 332 of the present invention allows simultaneous reception and analysis of video data 350 from all six camera assemblies C1-C6. The preprocessor 332 may be implemented as a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or as discrete components.
When the imaging system 304 is actuated from the sleep mode to the imaging session mode, the preprocessor 332 simultaneously receives the raw image data 350 generated by the camera assemblies C1-C6 via data lines DL1-DL6 that are coupled from the camera assemblies C1-C6 to video input ports 332a-332f of the preprocessor 332. The main system processor 336 includes decoding circuitry or a decoder 340 that analyzes the gray scale image data from the cameras C1-C6 and decodes an image 308′ of the target bar code 308, if present in the image data analyzed by the main system processor 336.
In one exemplary method of operation, the imaging system 304 and the image processing system 330 operate as shown generally at 400 in the flowchart of
At step 414, the statistical information SI generated by the preprocessor 332 is transferred to the main system memory 338. At step 416, the main system processor 336 analyzes the statistical data to determine, at step 418, whether the statistical information SI indicates that an imaged bar code 308′ or a portion of an imaged bar code 308′ is present in one or more of the series of image frames IF1-IF6. If the answer to the determination made by the main processor 336 at step 418 is yes, then, at step 420, the main processor 336 will instruct the preprocessor 332 to retrieve and transfer a selected portion 350′ of the captured image data 350 stored in the preprocessor memory 334 that corresponds to the area/location or areas/locations of one or more image frames IF1-IF6 where the main processor 336 has determined an imaged bar code 308′ or portion thereof is present. The selected image data 350′ is transferred from the preprocessor memory 334 to the main system memory 338 via a reduced bandwidth DMA channel 342.
At step 418, if it is determined by the main processor 336 that the statistical information SI generated by the preprocessor 332 does not indicate the presence of an imaged bar code 308′, then the process returns to step 406 where newly generated image data 350 is analyzed by the preprocessor 332. It should be noted, of course, that during the course of a bar code imaging/reading session, the preprocessor memory 334 continuously stores image data 350 as image data is continually generated by the imaging cameras C1-C6 during the session and, correspondingly, the preprocessor 332 continuously analyzes the generated image data 350 and continues to generate statistical information SI regarding the generated image data.
At step 422, when the selected portion 350′ of the captured image data 350 corresponding to an imaged bar code 308′ is transferred from the preprocessor memory 334 to the main system memory 338, the main system processor 336 (including the decoder 340) acts on the selected image data 350′ and attempts to decode the imaged bar code 308′. At step 424, if decoding of the imaged bar code 308′ from the selected image data 350′ is successful, then, at step 426, an indication is given of a successful decode and, at step 428, the imaging session is terminated.
Upon a successful reading of the target bar code 308, decoded data 356, representative of the data/information coded in the target bar code 308 is then output via a data output port 358 (
An alternate exemplary embodiment of a method of operation of the scanner 300 and particularly the image processing system 330 is shown generally at 500 in the flowchart of
At step 516, the preprocessor 332 analyzes the statistical data/information SI to determine, at step 518, whether the statistical information SI indicates that an imaged bar code 308′ or a portion of an imaged bar code 308′ is present in one or more of the series of image frames IF1-IF6. If the answer to the determination by the preprocessor 332 at step 518 is yes, then, at step 520, the preprocessor 332 will transfer to the main system memory 338 a portion 350′ of the captured image data 350 stored in the preprocessor memory 334 that corresponds to the location or locations of one or more image frames IF1-IF6 where the preprocessor 332 has determined an imaged bar code 308′ or portion thereof is present.
At step 518, if it is determined by the preprocessor 332 that the statistical information SI does not indicate the presence of an imaged bar code 308′, then the process returns to step 506 where newly generated image data 350 is analyzed by the preprocessor 332. It should be noted, of course, that during the course of a bar code imaging/reading session, the preprocessor memory 334 continuously stores image data 350 because image data is continually generated by the imaging cameras C1-C6 during the session and, correspondingly, the preprocessor 332 continuously analyzes the generated image data 350 and continues to generate statistical information SI regarding the generated image data.
At step 522, when the selected portion 350′ of the captured image data 350 corresponding to an imaged bar code 308′ is transferred from the preprocessor memory 334 to the main system memory 338, the main system processor 336 (including the decoder 340) acts on the selected image data 350′ and attempts to decode the imaged bar code 308′. At step 524, if decoding of the imaged bar code 308′ from the selected image data is successful, then, at step 526, an indication is given of a successful decode and, at step 528, the imaging session is terminated. If, at step 524, decoding is not successful, the process reverts back to step 506, as discussed previously.
Although the present invention has been described with a certain degree of particularity, it should be understood that various changes can be made by those skilled in the art without departing from the spirit or scope of the invention as hereinafter claimed.
The present application is a continuation-in-part of and claims priority from U.S. application Ser. No. 11/240,420, filed Sep. 30, 2005, to be issued as U.S. Pat. No. 7,430,682 on Sep. 30, 2008. Application Ser. No. 11/240,420 is incorporated herein in its entirety for any and all purposes.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 11240420 | Sep 2005 | US |
| Child | 12240385 | | US |