The field of the present disclosure relates generally to systems, apparatus, and methods for data reading and/or image capture and more particularly to systems, apparatus, and methods to couple imaging devices to a processing device.
Data reading devices are used to read optical codes, acquire data, and capture a variety of images. Optical codes typically comprise a pattern of dark elements and light spaces. There are various types of optical codes, including one-dimensional codes, such as a Universal Product Code (“UPC”) and EAN/JAN codes, and stacked and two-dimensional codes, such as PDF417 and Maxicode codes.
Data reading devices are well known for reading UPC and other types of optical codes on items (or objects), particularly in retail stores. As an optical code is passed through a view volume of the data reading device, the optical code is scanned and read by the data reading device to create electrical signals. The electrical signals can be decoded into alphanumerical characters or other data that can be used as input to a data processing system, such as a point of sale (POS) terminal (e.g., an electronic cash register). The POS terminal can use the decoded data to, for example, look up a price for the item, apply electronic coupons, and award points for a retailer or other rewards program. Scanning an optical code on items may enable, for example, rapid totaling of the prices of multiple such items.
One common data reading device is an imaging reader that employs an imaging device or sensor array, such as a charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) device. The imaging device generates electronic image data, typically in digital form. The image data is then processed, for example, to find and decode an optical code. An imaging reader can be configured to read both 1-D and 2-D optical codes, as well as other types of optical codes or symbols and images of other items.
An imaging reader may be capable of scanning multiple sides of an item, for example, by utilizing a plurality of imagers. The plurality of imagers may be arranged in a bi-optic configuration that may include multiple (e.g., two) scanner windows. By way of example, and not limitation, an “L-shaped” bi-optic data reading device may include a horizontal bottom scanner that is generally positioned at counter level and a vertical scanner that is positioned to scan one or more sides of an item. By scanning one or more sides of an item, an imaging reader having a plurality of imagers may increase the probability of a successful first scan (i.e., an improved first pass read rate) and may reduce time-consuming product manipulations and repeat scans by operators.
The images captured by the imager(s) of an imaging reader are processed to identify and decode an optical code on an item passed through a view volume of the imaging reader. Generally, a processing device is associated with each imager to provide processing of captured image data. A processing device may have a single imager interface and may only be capable of processing the output of a single imager at any given time. As a result, a plurality of processing devices may be needed in an imaging reader having a plurality of imagers, and additional hardware and/or software may be needed to coordinate cooperation between the plurality of processing devices. Thus, the present inventors have recognized, among other things, the desirability of reducing complexity and hardware in a multi-imager data reading device.
Understanding that drawings depict only certain preferred embodiments and are therefore not to be considered limiting in nature, the preferred embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings.
With reference to the drawings, this section describes particular embodiments and their detailed construction and operation. The embodiments described herein are set forth by way of illustration only and not limitation. The described features, structures, characteristics, and methods of operation may be combined in any suitable manner in one or more embodiments. In view of the disclosure herein, those skilled in the art will recognize that the various embodiments can be practiced without one or more of the specific details or with other methods, components, materials, or the like. In other instances, well-known structures, materials, or methods of operation are not shown or not described in detail to avoid obscuring more pertinent aspects of the embodiments.
Various dynamic intelligent imager switching systems, apparatus, and methods are described herein and can be utilized in various imager-based data readers, reading systems, and associated methods. Some embodiments of these data readers and systems may provide for improved or enhanced reading performance by providing multiple image fields to capture multiple views. In the following description of the figures and any example embodiments, it should be understood that any image fields or fields of view related to any imager may be partitioned into two or more regions, each of which may be used to capture a separate view or perspective of the view volume. In addition to providing more views than imagers, such embodiments may enhance the effective view volume beyond the view volume available to a single imager having a single field of view.
In the following description of the figures and any example embodiments, it should be understood that use of a data reader having the described features in a retail establishment is merely one use for such a system and should not be considered as limiting. By way of example, and not limitation, another use for data readers with the characteristics and features described herein may be in an industrial location such as a parcel distribution (e.g., postal) station.
The data reader 100 may have a frame, which may include a lower housing section 105 and an upper cover or platter section 130. In some embodiments, a portion or all of the cover or platter section 130 may be a weigh platter operable for weighing the item 20. The illustrated data reader 100 is typically installed into a countertop or work surface 170 (indicated by dashed line) of a checkstand such that the horizontal surface 132 of the platter 130 is flush, or substantially level, with the countertop or work surface 170 of the checkstand.
Typical or example positions of an operator (e.g., a check-out clerk 38) and a customer 40 are shown to facilitate description and to establish a frame of reference, and are not intended to be limiting. The check-out clerk 38 may typically stand or sit adjacent to a checker side 124 of the data reader 100, and away from an opposing customer side 122 of the data reader 100. The check-out clerk 38 may move or transport the item 20 across the horizontal surface 132 in the direction of motion 22. Although the direction of motion in
The data reader 100 may include one or more imagers (see
The imagers, such as the imager behind the window 115, may be operative to capture image data including optical codes on item surfaces facing away from the check-out clerk 38 (such as on the customer side 36 of the item 20), without interfering with the check-out clerk's 38 limbs as the item 20 is moved through the view volume. For capturing image data including optical codes disposed on the checker side 34 of the item 20, an imager may be positioned behind the window 160 with the imager's FOV directed across the platter 130 toward the customer side 122.
The data reader 100 may further include an upwardly extending post 175 extending along a vertical axis that may be generally transverse or even perpendicular in relation to the horizontal surface 132 of the platter 130. The post 175 may include a vertically elongated post body 176 having a first end 177 and an opposing second end 178. The post 175 may be mounted or otherwise secured to the platter 130 adjacent the first end 177 and may include a housing structure 179 supported adjacent the second end 178. The housing structure 179 may be sized and dimensioned to house an imager operable to capture a top-down view of the item 20. Additional details of the imager and its components are discussed below with reference to
It should be understood that the described arrangements are meant only to illustrate example embodiments and other arrangements for the post 175 not specifically described herein may be possible.
Components of the imaging system are described with reference to a given imager 205. It should be understood that the other two imagers 210 and 215 may have substantially similar features and characteristics to those described with respect to the imager 205. Accordingly, individual features of the imagers 210 and 215 are only generally described herein.
With reference to
The imager 205, the imager 210, and the imager 215 may be arranged in a variety of configurations and, for purposes of the present disclosure, it should be understood that they are not limited to the positioning or functions described above. Rather, the diagram of
Provisional Patent Application No. 61/657,634, entitled OPTICAL SCANNER WITH TOP DOWN READER, attorney docket no. 51306/1605, filed Jun. 8, 2012, U.S. patent application Ser. No. 13/895,258, entitled OPTICAL SCANNER WITH TOP DOWN READER, attorney docket no. 51306/1606, filed May 15, 2013, U.S. Provisional Patent Application No. 61/657,660, entitled IMAGING READER WITH IMPROVED ILLUMINATION SYSTEM, attorney docket no. 51306/1610, filed Jun. 8, 2012, and U.S. patent application Ser. No. 13/911,854, entitled IMAGING READER WITH IMPROVED ILLUMINATION SYSTEM, attorney docket no. 51306/1611, filed Jun. 6, 2013, each of which is hereby incorporated herein by reference in its entirety.
The imager 205, the imager 210, and the imager 215 may capture image data that may be processed by a processing device to identify and/or decode optical codes. Typically, a processing device would be associated with each of the imagers 205, 210, 215, thus requiring three processing devices in the illustrated embodiment of
The switching logic 302 of the dynamic intelligent imager switch 301 may be configured to receive data from the plurality of imagers 326 and select data from one of the plurality of imagers 326 to output (e.g., pass or forward) to the processor interface 308 and/or the processing device 328. Described differently, the switching logic 302 receives data at multiple inputs 310 and selects and forwards all or a portion of the data of only one of the inputs 310 at a single output 312. For example, the presently selected imager 330 may provide data that is received by the switching logic 302 at one of the multiple inputs 310 and the switching logic may forward all or a portion of that data to the output 312. The data received from the plurality of imagers 326 includes image data captured by the plurality of imagers 326. The switching logic 302 may also be configured to receive at the inputs 310 various other data, such as control data and/or configuration data that accompanies the image data from the plurality of imagers 326. For example, the control data and/or configuration data may include, but is not limited to, a clock, a clock rate, an exposure time, an exposure rate, a frame rate, a read-out time, a pixel clock, and the like. In another embodiment, the switching logic may also pass data (e.g., control data or configuration data) from the output 312 (e.g., from the processor interface 308 and/or processing device 328) back to the inputs 310 (e.g., to the plurality of imager interfaces 306 and/or the plurality of imagers 326).
As the data is received from the plurality of imagers 326 at the multiple inputs 310, a portion or set of the image data from a presently selected imager 330 (one of the plurality of imagers 326 that is coupled to a presently selected imager interface 306) may include a complete image frame containing substantially all the pixels captured by the presently selected imager 330. The complete image frame is passed, in the set of image data, to the output 312 of the switching logic 302 for forwarding to the detection logic 304 and/or the processor interface 308. In certain embodiments, a partial image frame, or other desired portion of an image frame or image data, may be passed to the output 312 of the switching logic 302 for forwarding to the detection logic 304 and/or the processor interface 308. The partial image frame may include desired image data (e.g., an optical code).
The single output 312 of the switching logic 302 may be coupled to the processor interface 308 and data passed to the output 312 of the switching logic 302 may be forwarded to the processor interface 308 and on to the processing device 328 coupled thereto. The data passed to the output 312 may include received image data, including a complete image frame (or another desired portion of an image frame). The data passed to the output 312 by the switching logic 302 may also include various control data and/or configuration data accompanying the image data. Accordingly, the detection logic 304 and/or the processor interface 308 may be able to receive and/or forward any of the image data, the control data, and/or the configuration data received from the presently selected imager 330.
The switching logic 302 selects which input 310 to forward to the output 312 based on a selection input 314 received from the detection logic 304. In other words, the switching logic 302 selects a presently selected imager interface 332, and therefore the presently selected imager 330, based on the selection input 314 received from the detection logic 304. As described more fully below, the detection logic 304 determines the selection input based on detection of a complete image frame (or other desired portion or expected portion of image data) being forwarded to the output 312 of the switching logic 302.
In one embodiment, the switching logic 302 may be a multiplexer configured to receive multiple inputs 310 and to select and pass (or forward) one of the inputs unchanged as an output 312. In another embodiment, the switching logic 302 may include hardware and/or circuitry configured to select one of multiple inputs 310 to forward all or a portion thereof as an output 312. In another embodiment, the switching logic 302 may include a combination of hardware and software components. In still another embodiment, the switching logic 302 may include embedded instructions. In still another embodiment, the switching logic 302 may include software modules. In still another embodiment, the switching logic 302 may include a demultiplexer configured to pass data back from the output 312 (e.g., from the processor interface 308) to the inputs 310 (e.g., to the plurality of imager interfaces 306).
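As a conceptual illustration only, the multiplexer behavior described above can be modeled in software. The following sketch is not part of the disclosed hardware; the class and method names (ImagerSwitch, select, forward) are hypothetical:

```python
# Hypothetical software model of the switching logic: a multiplexer that
# forwards data from the presently selected input, unchanged, to a single
# output. All names here are illustrative only.

class ImagerSwitch:
    def __init__(self, num_inputs):
        self.num_inputs = num_inputs
        self.selected = 0  # index of the presently selected imager interface

    def select(self, index):
        # Selection input (e.g., driven by the detection logic)
        if not 0 <= index < self.num_inputs:
            raise ValueError("invalid imager interface index")
        self.selected = index

    def forward(self, inputs):
        # Pass only the selected input's data to the single output
        return inputs[self.selected]
```

For example, with three inputs, selecting index 1 causes only the second imager's data to appear at the single output, mirroring the pass-one-of-many behavior described above.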
The detection logic 304 is coupled to the switching logic 302 and may be configured to detect and determine an appropriate time to switch the presently selected imager interface from a given imager interface to a different imager interface. The detection logic 304 may receive all or at least a portion of the output 312 of the switching logic 302. The detection logic 304 may receive the output 312 of the switching logic 302 as an input 316 and detect, for example, when a complete image frame is passed to the processor interface 308 and/or the processing device 328. In other words, the detection logic 304 may detect when a complete image frame (e.g., all, or substantially all, the pixels of a complete image as captured by one of the plurality of imagers 326) has been forwarded to the processing device 328. In another embodiment, the detection logic 304 may be configured to detect a partial image frame or an expected amount of image data. For example, the detection logic 304 may detect a given number of pixels (e.g. an expected number of pixels), which may be a partial image frame.
Once an expected or desired portion of image data (e.g., a complete image frame, a partial image frame, a given number of pixels, etc.) is detected (and forwarded to the processor interface 308 and/or the processing device 328), the detection logic 304 may provide an output 318 that is communicated to the switching logic 302 to instruct or otherwise signal or indicate to the switching logic 302 which input 310 to select to pass (or forward) to the output 312. In other words, the output 318 of the detection logic 304 may be communicated to the switching logic 302 as the selection input 314 of the switching logic 302.
The detection logic 304 may detect an expected or desired portion of image data (e.g., a complete image frame, a partial image frame, a given number of pixels, etc.) based on one of various methodologies. In one embodiment, as shown in
In another embodiment, the detection logic 304 may detect a desired amount of image data using more complicated algorithms, such as image completion or dead time detection algorithms. The algorithms may be implemented in hardware and/or software. An example algorithm may be implemented according to the following pseudo code:
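By way of illustration only, one hypothetical form such an image completion or dead time detection algorithm might take is sketched below. The signal sampling model, the row-counting scheme, and all names are assumptions for illustration rather than details taken from this disclosure:

```python
def frame_complete(samples, expected_rows):
    """Hypothetical image-completion check: count HSYNC rising edges (rows
    read out) while VSYNC is high; a falling edge on VSYNC marks the end of
    read-out (the start of imager dead time). The frame is deemed complete
    if the expected number of rows was observed."""
    rows = 0
    prev_vs = prev_hs = 0
    for vs, hs in samples:  # (VSYNC, HSYNC) logic levels, one pair per clock
        if vs and hs and not prev_hs:
            rows += 1                    # a new row began reading out
        if prev_vs and not vs:           # VSYNC falling edge: read-out ended
            return rows >= expected_rows
        prev_vs, prev_hs = vs, hs
    return False                         # read-out still in progress
```

In hardware, the same behavior could be realized with a row counter and simple edge-detection circuitry rather than software.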
In one embodiment, the detection logic 304 may include hardware and/or circuitry. In another embodiment, the detection logic 304 may include embedded instructions, for example to implement detection logic and/or apply image completion algorithms and/or dead time detection algorithms. In still another embodiment, the detection logic 304 may include a combination of hardware and software components to implement detection logic and/or apply image completion algorithms and/or dead time detection algorithms. In still another embodiment, the detection logic 304 may include software modules to implement detection logic and/or apply image completion algorithms and/or dead time detection algorithms.
In one embodiment, the detection logic 304 may detect complete image frames because the processing device 328 may be configured to process complete image frames to perform a function. By way of example, and not limitation, the processing device 328 may be configured to identify and/or decode optical codes, such as in a retail POS (point of sale) environment. In such an embodiment, if only partial images (i.e., partial or incomplete image frames) were forwarded to the processing device 328, then the processing device 328 may be limited in performing its intended function, such as identifying and decoding optical codes on items passed through a view volume of a POS. In other embodiments, the processing device 328 may be capable of processing partial image frames to perform a desired function, such as identifying and decoding optical codes.
Once a desired portion of an image frame (e.g., a complete image frame or a partial image frame) is forwarded on to the processor interface 308 and/or the processing device 328, the presently selected imager interface 332 may be changed. In other words, the detection logic 304 may provide data or a signal at the output 318 that instructs or otherwise indicates to the switching logic 302 to change the presently selected imager interface 332 to another imager interface of the plurality of imager interfaces 306. For example, the presently selected imager interface 332 may be a first imager interface 340 of the plurality of imager interfaces 306 and the detection logic 304 may change the presently selected imager interface 332 to be a second imager interface 342 of the plurality of imager interfaces 306. Accordingly, data of the second imager interface 342, including image data, is passed to the output 312 of the switching logic 302 and in turn to the processor interface 308 and the processing device 328 coupled thereto. The detection logic 304 again detects when a complete image frame is output to the processor interface 308 and/or the processing device 328 as described above. The detection logic 304 also again determines when to switch the presently selected imager interface from the second imager interface 342 to another of the plurality of imager interfaces 306.
The detection logic 304 may further include, or be coupled to, biasing logic 350 that can be configured to impose a pre-defined bias on the switching order. A particular imager of the plurality of imagers 326 may be better suited for capturing desired image data. By way of example, and not limitation, a bottom imager of an imaging reader may tend to more frequently capture images containing optical codes because an operator may tend to direct the optical code on an item downward into a glass plate (based on an assumption that the window is where scanning occurs). Accordingly, in a given situation, it may be desirable to have the processing device 328 process multiple images from a particular imager for every image from another imager. For example, it may be desirable to process three images captured by a first imager for every image that is processed from a second imager. Accordingly, a bias of 3 to 1 could be pre-defined and/or configured in the biasing logic 350. The biasing logic 350 may also enable the bias to be updated or modified dynamically based on a load, a need (e.g., a change in an operator/checker having different scanning habits than a previous operator/checker), and/or a success rate (e.g., a particular imager tends to make more or most of the successful reads). In one embodiment, the biasing logic 350 may comprise one or more counters to track the number of complete frames received from one or more imagers. An embodiment of biasing logic 350 is discussed more fully below with reference to
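Purely as an illustrative sketch of the counter-based biasing described above (the class and method names are hypothetical, not taken from this disclosure), a single counter can realize an N-to-1 bias:

```python
class BiasedSelector:
    """Hypothetical single-counter biasing logic imposing an N-to-1 bias:
    the favored imager (index 0) supplies `bias` complete frames for every
    one frame from the other imager. Names are illustrative only."""

    def __init__(self, bias=3):
        self.bias = bias   # e.g., 3-to-1 in favor of imager 0
        self.count = 0

    def next_imager(self):
        # Called once per complete image frame; returns the imager
        # interface the switching logic should select for that frame.
        if self.count < self.bias:
            self.count += 1
            return 0       # favored imager
        self.count = 0     # one frame from the other imager, then repeat
        return 1
```

With a bias of 3, frames would be drawn in the repeating pattern 0, 0, 0, 1; updating the bias dynamically (e.g., from a measured success rate) would amount to rewriting the counter's limit.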
The dynamic intelligent imager switch 301 may be appropriately configured to produce a clean and continuous stream of image data that comprises images captured by a plurality of imagers and that can be presented to the processing device 328 as if from a single imager. In this manner, the dynamic intelligent imager switch 301 may enable use of a single processing device 328 in imaging reader applications (e.g., systems, scanners) utilizing a plurality of imagers. The processing device 328 can be utilized in the same method and manner that it would be used were it coupled to a single imager. In other words, a processing device 328 that is designed and/or configured for use with a single imager in an imaging reader system may now be used, unchanged and without reconfiguration, to process image data received from a plurality of imagers 326.
The appropriate configuration of the dynamic intelligent imager switch 301 may be dictated based on requirements of the processing device 328. Consideration may be given to requirements of the processing device 328 including, but not limited to, an exposure time, a frame rate, a refresh rate, and additional clocks needed before or after receiving a complete image frame or other desired portion of the image frame (e.g., to satisfy a state machine or other hardware requirements or limitations inherent in the processing device 328).
In
The block diagram of the example dynamic intelligent imager switch 301 of
The camera A 402 and the camera B 404 may comprise a CCD (charge coupled device), CMOS (complementary metal oxide semiconductor) device, imaging array, or other suitable device for generating electronic image data in digital form. The FOV of the camera A 402 and the camera B 404 may be directed to a view volume (see
Similarly, image data captured by the camera B 404 is output on a connection CAM_B_DATA 462 to a second imager interface 434 of the dynamic intelligent imager switch 406. Additional data, which may include control data and/or configuration data, may also be output by the camera B 404 on a connection to the second imager interface 434. In the illustrated embodiment of
As illustrated in
The dynamic intelligent imager switch 406 may receive data at a plurality of imager interfaces, such as the first imager interface 432 and the second imager interface 434, and dynamically and intelligently pass (or forward) to the processor interface 436 all or a portion of the data received at a presently selected imager interface. For example, if the presently selected imager interface were the first imager interface 432, the dynamic intelligent imager switch 406 may be configured to pass (or forward) to the processor interface 436 all or a portion of the data received from the camera A 402. Furthermore, the dynamic intelligent imager switch 406 may also dynamically and intelligently change the presently selected imager interface to be the second imager interface 434. The dynamic intelligent imager switch 406 may perform dynamic intelligent switching, for example, by detecting when a complete image frame, received at the first imager interface 432 (also the presently selected imager interface), has been passed (or forwarded) to the processor interface 436 and/or the processor 408 and then switching the presently selected imager interface to the second imager interface 434 after the complete frame has been passed to the processor interface 436 and/or the processor 408. The switching between imager interfaces may be accomplished automatically. Manual switching and/or external input may be unnecessary. In another embodiment, the dynamic intelligent imager switch 406 may perform dynamic intelligent switching, for example, by detecting imager dead time (e.g., a period after a complete image frame is read out and before a next image frame begins to be read out) of a presently selected imager. The dead time may be an indication that a complete image frame has been read out and that switching between imager interfaces can be accomplished safely (e.g., without corrupting, damaging, and/or commingling image data in a stream of data presented to the processor 408).
In the embodiment of
The detection logic 412 detects when a complete image frame has been communicated to the processor interface 436 and/or the processor 408. More specifically, the detection logic 412 may receive as an input 426 all, or at least a portion, of the output 422 of the switching logic 410 and detect when a complete image frame is passed from the output 422 of the switching logic 410 to the processor interface 436. Once a complete image frame is detected (and passed to the processor interface 436 and/or the processor 408), the detection logic 412 may provide an output 428 that is communicated to the switching logic 410 to instruct or otherwise indicate to the switching logic 410 which input 420 to select to pass to the output 422. In other words, the output 428 of the detection logic 412 may be communicated to the switching logic 410 as the selection input 424 of the switching logic 410.
The detection logic 412 may detect a complete image by monitoring a vertical sync (VSYNC) signal received at the presently selected imager interface. The VSYNC signal of the presently selected imager interface, for example the first imager interface 432, may be received at an input 420 of the switching logic 410, passed to the output 422 of the switching logic 410, and received at an input 426 of the detection logic 412. For example, the VSYNC signal of the camera A 402 may initially be low (e.g., negative or zero) and may go high (e.g., positive) when the camera A 402 begins reading out a captured image. The VSYNC signal may be communicated on the connection CAM_A_VS 454 to the first imager interface 432. Once the last pixel of the last row of a captured image frame has been read out by the camera A 402, the VSYNC signal may go low again to indicate a complete image frame has been read out. The low VSYNC signal on the connection CAM_A_VS 454 to the first imager interface 432 is passed to the output 422 of the switching logic and received at the input 426 of the detection logic 412. The detection logic 412 detects the low VSYNC signal and may provide on the output 428 of the detection logic 412 a selection signal indicating to the switching logic 410 whether to change the presently selected imager interface. The selection signal may be communicated from the detection logic 412 to the switching logic 410 on a connection CAM_A_nCAM_B 470. The detection logic 412 may perform the same or a similar monitoring of a VSYNC signal of camera B 404 when the presently selected imager interface is the second imager interface 434.
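The VSYNC-monitoring loop described above can be sketched, for illustration only, as a small software model. The function name, the per-clock sampling of the two VSYNC lines, and the toggle-on-falling-edge policy are assumptions used to make the behavior concrete, not a definitive implementation of the disclosed circuitry:

```python
def run_switch(vsync_a, vsync_b, start=0):
    """Hypothetical model of the detection/switching loop: monitor the
    selected channel's VSYNC and toggle the selection (the role of the
    CAM_A_nCAM_B signal) on each falling edge, i.e., only after a complete
    image frame has been read out."""
    selected = start
    prev = 0
    out = []  # which camera's data appears at the output each clock (0=A, 1=B)
    for va, vb in zip(vsync_a, vsync_b):
        vs = va if selected == 0 else vb
        out.append(selected)
        if prev and not vs:          # falling edge: frame read-out complete
            selected = 1 - selected  # switch to the other imager interface
            prev = 0                 # restart edge tracking on the new channel
        else:
            prev = vs
    return out
```

Because the toggle occurs only on a falling edge of the selected channel's VSYNC, frames are never split across a switch, which models the "clean and continuous stream" behavior described above.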
The processor interface 436 passes the output 422 of the switching logic 410 to the processor 408. In the illustrated embodiment of
In the foregoing manner, the dynamic intelligent imager switch 406 may produce a clean and continuous stream of image data that comprises images captured by the camera A 402 and the camera B 404 and present the continuous stream of image data to the processor 408 as if all the images originated from a single imager. Thus, the dynamic intelligent imager switch 406 enables use of a single processor 408 in the illustrated data reader 400 with two imagers, namely, the camera A 402 and the camera B 404. The processor 408 may have a single imager interface, and may be programmed or otherwise configured to interact with a single imager (e.g., one of the camera A 402 or the camera B 404), but the dynamic intelligent imager switch 406 allows the processor 408 to be utilized in the same method and manner that it would be used were it coupled to a single imager. In other words, a processor 408 that is designed and/or configured for use with one of the camera A 402 and/or the camera B 404 in an imaging reader application (e.g., system, scanner) may now be used, unchanged and without reconfiguration, to process image data received from both of the camera A 402 and the camera B 404.
The detection logic 412 may further include, or be coupled to, biasing logic 480 that can be configured to impose a pre-defined bias on the switching order, as described above. The biasing logic 480 may include one or more counters. A first counter may count to ensure that a given number of complete image frames are passed to the processor 408 from the first imager interface 432, for example, before switching the presently selected imager interface from the first imager interface 432 to the second imager interface 434. A second counter may count to ensure that a given number of complete image frames are passed to the processor 408 from the second imager interface 434, for example, before switching the presently selected imager interface from the second imager interface 434 to the first imager interface 432. For example, it may be desirable to process three images captured by the camera A 402 for every two images that are processed from the camera B 404. Accordingly, a bias of 3-to-2 could be pre-defined or configured in the biasing logic 480. The biasing logic 350 (see
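As an illustrative sketch of the two-counter arrangement described above (the function name and quota scheme are hypothetical assumptions, not details from this disclosure), an N-to-M bias such as 3-to-2 can be expressed as a repeating schedule:

```python
def bias_schedule(n_a, n_b, frames):
    """Hypothetical two-counter bias (e.g., 3-to-2): forward n_a complete
    frames from camera A, then n_b frames from camera B, and repeat."""
    sel, count = 0, 0
    out = []
    for _ in range(frames):
        out.append(sel)              # camera supplying this frame (0=A, 1=B)
        count += 1
        limit = n_a if sel == 0 else n_b
        if count >= limit:           # quota met: switch cameras
            sel, count = 1 - sel, 0
    return out
```

For a 3-to-2 bias over ten frames, the schedule repeats as three frames from camera A followed by two from camera B, matching the example above.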
In another embodiment, a dynamic intelligent imager switch may include additional features. By way of example, and not limitation, the dynamic intelligent imager switch may further include translation logic that would enable a plurality of imagers having a first type of interface (e.g., the parallel interface depicted in
An example embodiment may be a dynamic intelligent imager switch that includes a plurality of imager interfaces, a processor interface, switching logic, and detection logic. The plurality of imager interfaces are operative to couple to a plurality of imagers. The plurality of imagers are configured to capture image data of a scene in a field of view (FOV) of each imager and present the captured image data as an output. The processor interface couples to an imager interface of a processing device. The processing device is configured to process image data captured by the plurality of imagers and identify and decode an optical code within the image data. The switching logic forwards, to the processor interface, image data received at a presently selected imager interface of the switching logic. The detection logic is configured to detect that a complete image frame is received at the presently selected imager interface and forwarded to the processor interface, before automatically switching the presently selected imager interface of the switching logic from a first imager interface of the plurality of imager interfaces to a second imager interface of the plurality of imager interfaces.
An imager request signal on the connection CAM_A_REQ 458 and an imager request signal on the connection CAM_B_REQ 468 are provided at time t1 to indicate to the camera A 402 and the camera B 404 that an image is being requested (e.g., by the processor 408 or the dynamic intelligent imager switch 406). The request signals are shown in the illustrated embodiment by a rising edge (i.e., the transition from low to high) and, shortly thereafter, a falling edge (i.e., the transition from high to low), but, as can be appreciated, other signal patterns are possible. Furthermore, although the request signals on connection CAM_A_REQ 458 and connection CAM_B_REQ 468 are coincident at time t1, they may also be non-coincident so as to ensure alignment of imager dead times, as appropriate. These request signals start the imagers sending image data, which is signified by a rising edge of the VSYNC signals on the connection CAM_A_VS 454 and on the connection CAM_B_VS 464. A slight phase delay between the two imagers (including their VSYNC signals) is shown to illustrate which camera's data is being passed to the output 422 of the switching logic 410 and/or the processor interface 436. Specifically, the VSYNC signal of the camera A 402 on the connection CAM_A_VS 454 has a rising edge at time t2a and the VSYNC signal of the camera B 404 on the connection CAM_B_VS 464 has a rising edge at time t2b. Because the switching logic 410 is set to select the data received at the second imager interface 434, which is coupled to the camera B 404, the VSYNC signal on the connection CAM_B_VS 464 also appears at time t2b on the connection CAM_OUT_VS 474.
With the VSYNC signals high, the camera A 402 and the camera B 404 begin to transfer (read out) image data. The transfer of image data involves HSYNC signals to indicate a beginning of a transfer (read out) of a row of pixels. Specifically, an HSYNC signal on the connection CAM_A_HS 456 has a rising edge at time t3a, and corresponds to image data being transferred (read out) on the connection CAM_A_DATA 452 at time t3a. An HSYNC signal on the connection CAM_B_HS 466 has a rising edge at time t3b and corresponds to image data being transferred (read out) on the connection CAM_B_DATA 462 at time t3b. Again, because the switching logic 410 is set to select the data received at the second imager interface 434, which is coupled to the camera B 404, the HSYNC signal on the connection CAM_B_HS 466 also appears at time t3b on the connection CAM_OUT_HS 476 and the image data being read out also appears on the connection CAM_OUT_DATA 472 at time t3b.
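The selection behavior described above amounts to a 2-to-1 multiplexer on the sync and data lines. The following is an assumed software model of that multiplexer, using dictionaries to stand in for the hardware connections; it is a sketch, not the disclosed circuit:

```python
# Minimal model (assumed, not from the disclosure) of the switching logic
# as a 2-to-1 multiplexer on the VSYNC, HSYNC, and data lines.

def switch_mux(cam_a, cam_b, cam_a_ncam_b):
    """Pass camera A's signals when CAM_A_nCAM_B is high, else camera B's.

    Each camera argument is a dict with 'VS', 'HS', and 'DATA' line values.
    Returns the CAM_OUT_* values seen at the processor interface.
    """
    src = cam_a if cam_a_ncam_b else cam_b
    return {"CAM_OUT_VS": src["VS"],
            "CAM_OUT_HS": src["HS"],
            "CAM_OUT_DATA": src["DATA"]}
```

With CAM_A_nCAM_B low, camera B's VSYNC, HSYNC, and data values pass through unchanged, consistent with the timing relationships described above for times t2b and t3b.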
For ease of description, in the timing diagram 500 of
After all rows of pixels of an image frame have been transferred (or read out), the VSYNC signal falls to indicate the most recently captured image frame has been completely read out. In
In the illustrated example of
A high signal on the connection CAM_A_nCAM_B 470 indicates that the selection signal received at the selection input 424 of the switching logic 410 is now set to select the first imager interface 432, which is coupled to the camera A 402. In other words, at time t5 the presently selected imager interface becomes the first imager interface 432 and data from the camera A 402 is now being passed by the switching logic 410 to the processor interface 436. Also, data from the camera A 402 is now being received at the input 426 of the detection logic 412.
Another imager request signal on the connection CAM_A_REQ 458 and another imager request signal on the connection CAM_B_REQ 468 are provided at time t6 to indicate to the camera A 402 and the camera B 404 that an image is being requested (e.g., by the processor 408 or the dynamic intelligent imager switch 406). These request signals prompt the cameras 402, 404 to begin sending image data, which is signified by a rising edge of the VSYNC signals on the connection CAM_A_VS 454 and on the connection CAM_B_VS 464. Again, a slight phase delay between the two imagers (including their VSYNC signals) is shown to illustrate which camera's data is being passed to the output 422 of the switching logic 410 and/or the processor interface 436. Specifically, the VSYNC signal of the camera A 402 on the connection CAM_A_VS 454 has a rising edge at time t7a and the VSYNC signal of the camera B 404 on the connection CAM_B_VS 464 has a rising edge at time t7b. Because the switching logic 410 is set to select the data received at the first imager interface 432, which is coupled to the camera A 402, the VSYNC signal on the connection CAM_A_VS 454 also appears at time t7a on the connection CAM_OUT_VS 474.
With the VSYNC signals high, the camera A 402 and the camera B 404 begin to transfer (read out) image data. As before, the transfer of image data involves the HSYNC signals to indicate a beginning of a transfer (read out) of a row of pixels. The HSYNC signal on the connection CAM_A_HS 456 has a rising edge at time t8a, and corresponds to image data being transferred (read out) on the connection CAM_A_DATA 452 at time t8a. The HSYNC signal on the connection CAM_B_HS 466 has a rising edge at time t8b and corresponds to image data being transferred (read out) on the connection CAM_B_DATA 462 at time t8b. Again, because the switching logic 410 is set to select the data received at the first imager interface 432, which is coupled to the camera A 402, the HSYNC signal on the connection CAM_A_HS 456 also appears at time t8a on the connection CAM_OUT_HS 476 and the image data being read out also appears on the connection CAM_OUT_DATA 472 at time t8a. Again, for ease of description, only three rows of data are shown as being transferred.
After all rows of pixels of an image frame have been transferred (or read out) from the camera A 402, the VSYNC signal falls to indicate the most recently captured image frame has been completely read out. In
In the illustrated example of
The low signal on the connection CAM_A_nCAM_B 470 indicates that the selection signal received at the selection input 424 of the switching logic 410 is again set to select the second imager interface 434, which is coupled to the camera B 404. In other words, at time t10 the presently selected imager interface again becomes the second imager interface 434 and data from the camera B 404 is passed by the switching logic 410 to the processor interface 436 and received at the input 426 of the detection logic 412.
In the illustrated timing diagram 500 of
The data transfer on each of the connection CAM_A_DATA 452, the connection CAM_B_DATA 462, and the connection CAM_OUT_DATA 472 is depicted using the notation “XXXX.” This notation may refer to any number of individual bits and/or any suitable number of pixels. A pixel may comprise any suitable number of individual bits, and thus a serial transfer of a pixel may nevertheless involve parallel transfer of a plurality of bits. The individual bits may be transferred serially, or may be transferred in parallel sets, or entirely in parallel. Also, the pixels may be transferred serially or in parallel with any combination of other pixels.
In one embodiment, a complete image frame may be 1000×1000 pixels. Each of the thousand rows is communicated on a single HSYNC signal, as shown in
In other embodiments, the pixels may be communicated in other ways. In one embodiment, each pixel may be transferred individually in a serial fashion, as described above. Alternatively, the pixels of each row may be transferred in parallel sets of any suitable size, such as one hundred sets of ten pixels (e.g., one hundred bits at a time if pixels are ten bits) in parallel, fifty sets of twenty pixels (e.g., two hundred bits at a time if pixels are ten bits) in parallel, etc.
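The row-transfer arithmetic above can be made concrete with a short worked example, assuming the ten-bit pixels and 1000-pixel rows given in the text; the helper function is illustrative only:

```python
# Worked example of the row-transfer arithmetic described above,
# assuming 10-bit pixels and a 1000-pixel row per the text.

def transfers_per_row(row_pixels, pixels_per_transfer, bits_per_pixel=10):
    """Return (transfers needed per row, bits moved per transfer)."""
    assert row_pixels % pixels_per_transfer == 0, "row must divide evenly"
    return (row_pixels // pixels_per_transfer,
            pixels_per_transfer * bits_per_pixel)

# One hundred sets of ten pixels  -> 100 transfers of 100 bits each.
# Fifty sets of twenty pixels     -> 50 transfers of 200 bits each.
```

Either packing moves the same 10,000 bits per row; the choice trades bus width against the number of transfer cycles per HSYNC period.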
Furthermore, the interfaces and the individual connections may further include additional data not shown. For example, a pixel clock signal may be communicated on a connection between the imagers 402, 404, the dynamic intelligent imager switch 406, and the processor 408. The pixel clock signal may indicate when an individual pixel is transferred, if transferred serially, or may indicate when a parallel set of pixels is transferred.
A first counter CAM_A_CNT 602 may count to ensure that a given number of complete image frames are passed to the processor 408 from the first imager interface 432 (which is coupled to the camera A 402) before switching the presently selected imager interface from the first imager interface 432 to the second imager interface 434. In the illustrated scenario of
Initially the connection CAM_A_nCAM_B 470 is set high, which causes the switching logic 410 to pass the data from the camera A 402 to the output CAM_OUT 610. Each time a complete image frame is received, the first counter CAM_A_CNT 602 is decremented, while the second counter CAM_B_CNT 604 remains at two. When the first counter CAM_A_CNT 602 hits zero, the detection logic changes the signal on the connection CAM_A_nCAM_B 470 from high to low, which causes the switching logic 410 to pass the data from the camera B 404 to the output CAM_OUT 610. When a complete image frame is received, the second counter CAM_B_CNT 604 is now decremented. When both counters 602, 604 are at zero, they are reloaded with the values provided on the connection CAM_A_CNT_RLOAD 606 and the connection CAM_B_CNT_RLOAD 608, respectively, and the process repeats.
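The counter and reload behavior just described can be simulated at the signal level. The sketch below is a hypothetical software model of the biasing logic, tracking the level of CAM_A_nCAM_B for each complete frame under the 3-to-2 reload values used as the example above:

```python
# Hypothetical signal-level model of the biasing logic: CAM_A_nCAM_B
# starts high (camera A selected); each complete frame decrements the
# active counter; when a counter hits zero the selection toggles; when
# both counters are zero, they reload from CAM_A_CNT_RLOAD / CAM_B_CNT_RLOAD.

def simulate(frames, a_rload=3, b_rload=2):
    """Return the CAM_A_nCAM_B level (1 = camera A) seen for each frame."""
    cnt = [a_rload, b_rload]     # [CAM_A_CNT, CAM_B_CNT]
    sel_high = True              # CAM_A_nCAM_B high -> camera A selected
    trace = []
    for _ in range(frames):
        trace.append(1 if sel_high else 0)
        idx = 0 if sel_high else 1
        cnt[idx] -= 1            # a complete frame was forwarded
        if cnt[idx] == 0:
            sel_high = not sel_high
        if cnt == [0, 0]:
            cnt = [a_rload, b_rload]   # reload and repeat the cycle
    return trace
```

Over ten frames the model yields three camera A frames, two camera B frames, and then the pattern repeats, matching the 3-to-2 bias described above.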
In this manner, the biasing logic ensures that a given number of complete image frames are passed to the processor 408 from the presently selected imager before switching the presently selected imager. Dynamic modification of the signals on the connection CAM_A_CNT_RLOAD 606 and the connection CAM_B_CNT_RLOAD 608 may enable the bias to be updated or modified dynamically based on a load or need (e.g., a change in an operator/checker having different scanning habits than a previous operator/checker).
If the image remaining count is equal to zero upon a check (no from step 702), the image remaining count may be reloaded (step 712) with a pre-defined count according to the desired bias. The presently selected imager is also changed (step 714) to a different imager. An imager request signal can be provided (step 710) to the imagers to request a new image capture. The method 700 is then repeated with the new presently selected imager and based on its corresponding image remaining count.
In another embodiment, the image remaining count of all counters may be reloaded (step 712) with their respective pre-load amounts after all counters, or a plurality of counters, reach zero and/or contemporaneously with the change (step 714) to a different imager.
Other embodiments are envisioned. Although the description above contains certain specific details, these details should not be construed as limiting the scope of the invention, but as merely providing illustrations of some embodiments/examples. It should be understood that subject matter disclosed in one portion herein can be combined with the subject matter of one or more of other portions herein as long as such combinations are not mutually exclusive or inoperable.
The terms and descriptions used herein are set forth by way of illustration only and not meant as limitations. It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention(s). The scope of the present invention should, therefore, be determined only by the following claims.
This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 61/658,250, filed Jun. 11, 2012, and titled DYNAMIC IMAGER SWITCHING, which is hereby incorporated by reference herein in its entirety.