The present application generally relates to systems and methods for selecting a portion of an image captured by an image capture device.
Many mobile computing devices (e.g., scanners, PDAs, mobile phones, laptops, mp3 players, etc.) include digital cameras to extend their functionality. For example, an imager-based barcode reader may utilize a digital camera for capturing images of barcodes, which come in various forms (e.g., parallel lines, patterns of dots, concentric circles, hidden images, etc.), both one dimensional (1D) and two dimensional (2D).
The imager-based barcode reader typically provides a display screen which presents a preview of an imaging field of the imager. Thus, a user may visually confirm that a barcode will be included in an image generated by the imager. Even though conventional decoders can locate and decode barcodes regardless of their location within the image, users typically believe that the barcode must be centered within the image to be decoded properly. In addition, users typically believe that the barcode must appear large within the image to be decoded properly and, as a result, place the imager-based barcode reader extremely close to the barcode. However, conventional decoders can decode barcodes that are relatively small within the image. Therefore, between orienting the barcode in the display and manually zooming, capturing the image may prove unnecessarily time consuming.
The present invention relates to a system and method for selecting a portion of an image. The method comprises obtaining a first image by an image capture device, analyzing the first image to detect at least one predetermined object therein, generating a second image as a function of the first image, the second image including at least one portion of the first image, the at least one portion including the at least one predetermined object, selecting one of the portions, and performing a predetermined operation on the selected portion.
a illustrates an exemplary embodiment of an image capture device capturing multiple images according to the present invention.
b illustrates an exemplary embodiment of a preview image generated by an image capture device according to the present invention.
a illustrates an exemplary embodiment of a summary image generated by an image capture device according to the present invention.
b illustrates another exemplary embodiment of a summary image generated by an image capture device according to the present invention.
a illustrates another exemplary embodiment of a preview image generated by an image capture device according to the present invention.
b illustrates another exemplary embodiment of a summary image generated by an image capture device according to the present invention.
a illustrates a further exemplary embodiment of a preview image generated by an image capture device according to the present invention.
b illustrates an exemplary embodiment of a position determining function according to the present invention.
c illustrates another exemplary embodiment of a position determining function according to the present invention.
The present invention may be further understood with reference to the following description and appended drawings, wherein like elements are provided with the same reference numerals. The exemplary embodiments of the present invention describe a system and method for selecting a portion of an image captured by an image capture device. In the exemplary embodiment, the image capture device detects a predetermined object (e.g., barcodes, signatures, shipping labels, dataforms, etc.) in the image and allows a user to select one or more of the items for additional processing, as will be described below.
The processor 116 may comprise a central processing unit (CPU) or other processing arrangement (e.g., a field programmable gate array) for executing instructions stored in the memory 118 and controlling operation of other components of the device 100. The memory 118 may be implemented as any combination of volatile memory, non-volatile memory and/or rewritable memory, such as, for example, Random Access Memory (RAM), Read Only Memory (ROM) and/or flash memory. The memory 118 stores instructions used to operate the device 100 and data generated by the device 100. For example, the memory 118 may comprise an operating system and a signal processing method (e.g., image capture method, image decoding method, etc.). The memory 118 may also store image data corresponding to images previously captured by the imaging arrangement 112.
The imaging arrangement 112 (e.g., a digital camera) may be used to capture an image (monochrome and/or color). The output arrangement 114 (e.g., a liquid crystal display, a projection display, etc.) may be used to view a preview of the image prior to capture and/or playback of previously captured images. The preview outputted on the output arrangement 114 may be updated in real-time, providing visual confirmation to a user that an image captured by the imaging arrangement 112 would include the item of interest, e.g., a predetermined object. The imaging arrangement 112 may be activated by signals received from a user input arrangement (not shown) such as, for example, a keypad, a keyboard, a touch screen, a trigger, a track wheel, a spatial orientation sensor, an accelerometer, a MEMS sensor, a microphone and a mouse.
In step 204, the processor 116 analyzes the preview image 300 to detect the predetermined object(s) therein. For example, in the exemplary embodiment, the processor 116 may be configured to detect decodable dataforms. Thus, the processor 116 detects the three barcodes 505 in the preview image 300 and ignores any portion of the preview image 300 which does not include decodable dataforms. Those of skill in the art will understand that the processor 116 may be configured to detect any predetermined object in the preview image 300 including, but not limited to, barcodes, shipping labels, signatures, etc. In another exemplary embodiment, the processor 116 may generate and analyze the preview image 300 in the background, without displaying the preview image 300 on the output arrangement 114. Thus, the processor 116 may continually generate and analyze successive preview images to identify the predetermined objects therein.
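The detection in step 204 can be sketched in code. The following is a minimal illustration, not the specification's detector: the preview image is modeled as a 2D grid in which truthy cells mark "dataform-like" pixels, and each connected region is reported as a bounding box; a real device would instead run a symbology-locating algorithm over the pixel data.

```python
# Sketch of step 204: scan a toy preview "image" for object-like
# regions and return one bounding box per detected object.

def detect_objects(grid):
    """Return bounding boxes (top, left, bottom, right) of connected
    truthy regions, one per detected predetermined object."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # flood-fill this region to find its extent
                stack = [(r, c)]
                seen[r][c] = True
                top, left, bottom, right = r, c, r, c
                while stack:
                    y, x = stack.pop()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

preview = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
print(detect_objects(preview))  # [(0, 0, 1, 1), (0, 4, 1, 4)]
```

Any portion of the grid outside the returned boxes is simply ignored, mirroring how the processor 116 ignores regions without decodable dataforms.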
In step 206, the processor 116 generates a summary image 400 comprising the predetermined object(s) detected in the preview image 300 and displays the summary image 400 on the output arrangement 114.
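Step 206 can be sketched as cropping each detected region out of the preview and composing the crops into a single summary. The layout below (vertical stacking with blank separator rows) is an assumption for illustration; the specification leaves the arrangement of the summary image 400 open.

```python
# Sketch of step 206: build a summary image containing only the
# detected objects by cropping each bounding box out of the preview
# grid and stacking the crops. Boxes are (top, left, bottom, right).

def crop(grid, box):
    top, left, bottom, right = box
    return [row[left:right + 1] for row in grid[top:bottom + 1]]

def make_summary(grid, boxes):
    """Stack the cropped objects vertically, padding each crop to the
    widest crop and inserting a blank separator row between objects."""
    crops = [crop(grid, b) for b in boxes]
    width = max(len(c[0]) for c in crops)
    summary = []
    for i, c in enumerate(crops):
        if i:
            summary.append([0] * width)          # blank separator row
        for row in c:
            summary.append(row + [0] * (width - len(row)))
    return summary

preview = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
]
print(make_summary(preview, [(0, 0, 1, 1), (0, 4, 1, 4)]))
```

The resulting summary contains only the objects of interest, which is what the output arrangement 114 then displays.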
In another exemplary embodiment, as shown in
As understood by those of skill in the art, when the processor 116 only detects a single predetermined object in the preview image 300, the object may be rotated, centered and/or enlarged in the summary image 400. For example, as shown in
In step 208, one or more of the predetermined objects in the summary image 400 is selected. In the exemplary embodiment, a selector may be shown on the output arrangement 114 and movable between the predetermined objects. For example, the selector may be a cursor, highlight, crosshair, etc. which the user can movably position over the predetermined objects using a second user input, e.g., a keystroke, a tactile input, a gesture input, a voice command, a trigger squeeze or other user interface action. Those of skill in the art will understand that when the summary image 400 only includes a single predetermined object, the step 208 may be eliminated from the method 200. In another exemplary embodiment, the processor 116 may select one or more of the predetermined objects automatically. That is, the processor 116 may be configured/defaulted to select a predetermined type of the predetermined objects. For example, the processor 116 may identify a UPC barcode and an EAN barcode on the item 505, but be configured to select only the UPC barcode for decoding.
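Both selection modes of step 208 can be sketched briefly: a selector that cycles through the detected objects on each user input, and an automatic selection filtered by object type. The object records and symbology names below are illustrative assumptions, not structures defined by the specification.

```python
# Sketch of step 208: manual selection via a cycling selector, and
# automatic selection of a configured object type.

class Selector:
    """Cursor/highlight that the user moves between detected objects."""
    def __init__(self, objects):
        self.objects = objects
        self.index = 0

    def advance(self):
        # e.g., on a keystroke, gesture, voice command or trigger squeeze
        self.index = (self.index + 1) % len(self.objects)

    @property
    def current(self):
        return self.objects[self.index]

def auto_select(objects, wanted_type="UPC"):
    """Device configured/defaulted to select only a given object type."""
    return [o for o in objects if o["type"] == wanted_type]

objs = [{"type": "UPC", "box": (0, 0, 1, 1)},
        {"type": "EAN", "box": (0, 4, 1, 4)}]
sel = Selector(objs)
sel.advance()
print(sel.current["type"])  # EAN
print(auto_select(objs))    # only the UPC barcode record
```

When only one object is detected, the selector step can be skipped entirely, as the specification notes.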
In another exemplary embodiment, the processor 116 may detect properties, positions, etc. of the predetermined objects and position the selector over a selected one of the objects based thereon. For example, as shown in
In step 210, the processor 116 determines whether the selected predetermined object(s) should be captured. In the exemplary embodiment, the processor 116 may detect a third user input indicative of the user's desire to capture the selected predetermined object(s). An exemplary image preview, selection and capture process may be conducted as follows: the user may squeeze and release a trigger on the device 100 once to generate the summary image 400. A second squeeze of the trigger moves the selector over the predetermined objects shown in the summary image 400, and a third squeeze of the trigger selects and captures the image of the predetermined object. If the processor 116 does not detect the third user input, the user may continue to move the selector over the predetermined objects.
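The trigger sequence of step 210 can be modeled as a small state machine. One modeling assumption here: because the user "may continue to move the selector" until the third input is detected, the sketch treats selector-moving squeezes and the capture input as distinct events; the state names are illustrative.

```python
# Sketch of step 210's trigger sequence: first squeeze shows the
# summary, further squeezes move the selector, and a distinct capture
# input selects the object currently under the selector.

class TriggerFlow:
    def __init__(self, num_objects):
        self.state = "idle"          # idle -> summary -> selecting
        self.num_objects = num_objects
        self.selected = 0            # index of object under the selector

    def squeeze(self):
        if self.state == "idle":
            self.state = "summary"       # 1st squeeze: show summary image
        elif self.state == "summary":
            self.state = "selecting"     # 2nd squeeze: selector appears
        else:
            # later squeezes cycle the selector over the objects
            self.selected = (self.selected + 1) % self.num_objects

    def capture(self):
        """Third user input: return the index of the captured object."""
        assert self.state == "selecting", "nothing is selected yet"
        return self.selected

flow = TriggerFlow(num_objects=3)
flow.squeeze()          # summary image displayed
flow.squeeze()          # selector appears on object 0
flow.squeeze()          # selector moves to object 1
print(flow.capture())   # 1
```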
In step 212, the processor 116 detects the third user input, captures the preview image or a selected portion thereof which includes the predetermined object and processes the captured image. The processing may include storing the captured image in memory, inputting the captured image into a decoder and/or another image processing element/algorithm, etc. For example, when the captured image includes a decodable dataform, the captured image may be decoded to reveal data encoded in the dataform.
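The processing dispatch of step 212 can be sketched as follows. The "decoder" below is a placeholder that merely unpacks a toy payload; a real device would invoke an actual symbology decoder on the captured pixel data.

```python
# Sketch of step 212: store the captured portion and, when it contains
# a decodable dataform, also hand it to a decoder.

def decode(payload):
    # stand-in decoder: the toy payload is just a reversed string
    return payload[::-1]

def process_capture(captured, store):
    """Store the captured image; decode it if it is a dataform."""
    store.append(captured)
    if captured["kind"] == "dataform":
        return decode(captured["payload"])
    return None

store = []
result = process_capture({"kind": "dataform", "payload": "321-CPU"}, store)
print(result)       # UPC-123
print(len(store))   # 1
```

A captured signature or shipping label would take the non-dataform path and simply be stored for later image processing.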
An advantage of the present invention is that it allows an imager-equipped device to provide optimal scanning performance without projecting a targeting pattern onto an object to be captured, which may conserve power for the device. Another advantage of the present invention is that it provides faster image capture and faster decoding, and may lower costs by eliminating time wasted manually reorienting the device to obtain an enlarged, rotated, centered, etc. view of the object.
The present invention has been described with reference to the above exemplary embodiments. One skilled in the art would understand that the present invention may also be successfully implemented if modified. Accordingly, various modifications and changes may be made to the embodiments without departing from the broadest spirit and scope of the present invention as set forth in the claims that follow. The specification and drawings, accordingly, should be regarded in an illustrative rather than restrictive sense.