Some types of image processing, such as high dynamic range (HDR) image processing, involve combining one camera's sequential still image output (e.g., each image captured with a differing exposure) into a single still image with a higher dynamic range (i.e., an image with a larger range of luminance variation between light and dark image areas). This approach is often called exposure bracketing and can be found in conventional cameras.
Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
This disclosure pertains to a device, method, computer useable medium, and processor programmed to automatically utilize simultaneous image captures in an image processing pipeline in a digital camera or digital video camera. One of ordinary skill in the art would recognize that the techniques disclosed may also be applied to other contexts and applications.
For cameras in embedded devices (e.g., digital cameras, digital video cameras, mobile phones, personal data assistants (PDAs), tablets, portable music players, and desktop or laptop computers) to produce more visually pleasing images, techniques such as those disclosed herein can improve image quality without incurring significant computational overhead or power costs.
To acquire image data, a digital imaging device may include an image sensor that provides a number of light-detecting elements (e.g., photodetectors) configured to convert light detected by the image sensor into an electrical signal. An image sensor may also include a color filter array that filters light captured by the image sensor to capture color information. The image data captured by the image sensor may then be processed by image processing pipeline circuitry, which may apply various image processing operations to the image data to generate a full color image that may be displayed for viewing on a display device, such as a monitor.
Conventional image processes, such as conventional high dynamic range (HDR) image processing, require multiple images to be captured sequentially and then combined to yield an HDR image with enhanced image characteristics. In conventional HDR image processing, multiple images are captured sequentially by a single image sensor at different exposures and are combined to produce a single image with a higher dynamic range than is possible with the capture of a single image. For example, capture of an outdoor nighttime shot with a neon sign might result in either over-exposure of the neon sign or under-exposure of the other portions of the scene. However, capturing both an over-exposed image and an under-exposed image and combining the multiple images can yield an HDR image with adequate exposure for both the sign and the rest of the scene. This approach is often called exposure bracketing, but it requires that the captured images be substantially similar even though taken sequentially, to prevent the introduction of blurring or ghosting.
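To make the combining step concrete, the following is a minimal sketch of an exposure-bracketed merge, assuming pre-aligned floating-point frames in [0, 1]; the mid-tone triangle weighting is an illustrative choice, not a requirement of the processing described above.

```python
import numpy as np

def merge_bracketed(images, exposure_times):
    """Merge exposure-bracketed frames into one high-dynamic-range image.

    Each pixel is weighted by how well exposed it is (a triangle weight
    peaking at mid-gray), and radiance estimates from each frame are then
    combined. Assumes pre-aligned float images in [0, 1].
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0      # favor well-exposed mid-tones
        acc += w * (img / t)                   # per-frame radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)  # weighted average radiance
```

A real bracketing pipeline would precede this merge with alignment and deghosting, since the frames are captured sequentially.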
Embodiments of the present disclosure provide enhanced image processing by utilizing multiple images that are captured simultaneously. Referring to
One prospective use of an imaging device 150 with multiple cameras or image sensors would be to increase the number of dimensions represented in a displayed image. An example of this type of functionality is a stereoscopic camera, which typically has two cameras (e.g., two image sensors). Embodiments of the present disclosure, however, may have more than two cameras or image sensors. Further, embodiments of an imaging device 150 may have modes of operation such that one mode allows the imaging device 150 to capture a 2-dimensional (2D) image; a second mode allows the imaging device to capture a multi-dimensional image (e.g., a 3D image); and a third mode allows the imaging device to simultaneously capture multiple images and use them to produce one or more 2D enhanced images to which an image processing effect has been applied. Accordingly, some embodiments of the present disclosure encompass a configurable and adaptable multi-imager camera architecture that operates in a stereoscopic (3D) mode, a monoscopic (single-imager 2D) mode, or a combinational monoscopic (multiple-imager 2D) mode. In one embodiment, mode configuration involves user selection, while adaptation between modes can be automatic or prompted. For example, monoscopic mode may be used in situations where it is normally sufficient, with a switch to combinational monoscopic operation when control logic 105 detects the need.
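The three modes could be modeled along the following lines; the mode names and the selection heuristic are illustrative assumptions rather than the disclosed implementation.

```python
from enum import Enum, auto

class CaptureMode(Enum):
    """Illustrative operating modes for a multi-imager device."""
    MONO_2D = auto()           # one imager, one 2D frame
    STEREO_3D = auto()         # two imagers, stereoscopic pair
    COMBINATIONAL_2D = auto()  # multiple imagers fused into one enhanced 2D frame

def select_mode(user_choice=None, need_detected=False):
    """Honor an explicit user choice; otherwise default to plain 2D and
    escalate to combinational 2D when control logic detects the need
    (e.g., low light). A sketch of the configuration/adaptation split."""
    if user_choice is not None:
        return user_choice
    return CaptureMode.COMBINATIONAL_2D if need_detected else CaptureMode.MONO_2D
```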
In some embodiments, the image processing circuitry 100 may include various subcomponents and/or discrete units of logic that collectively form an image processing “pipeline” for performing each of various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs (application-specific integrated circuits)) or software, or via a combination of hardware and software components. The various image processing operations may be provided by the image processing circuitry 100.
The image processing circuitry 100 may include front-end processing logic 103, pipeline processing logic 104, and control logic 105, among others. The image sensor(s) 101 may include a color filter array (e.g., a Bayer filter) and may thus provide both light intensity and wavelength information captured by each imaging pixel of the image sensors 101 to provide for a set of raw image data that may be processed by the front-end processing logic 103.
The front-end processing logic 103 may receive pixel data from memory 108. For instance, the raw pixel data may be sent to memory 108 from the image sensor 101. The raw pixel data residing in the memory 108 may then be provided to the front-end processing logic 103 for processing.
Upon receiving the raw image data (from image sensor 101 or from memory 108), the front-end processing logic 103 may perform one or more image processing operations. The processed image data may then be provided to the pipeline processing logic 104 for additional processing prior to being displayed (e.g., on display device 106), or may be sent to the memory 108. The pipeline processing logic 104 receives the “front-end” processed data, either directly from the front-end processing logic 103 or from memory 108, and may provide for additional processing of the image data in the raw domain, as well as in the RGB and YCbCr color spaces, as the case may be. Image data processed by the pipeline processing logic 104 may then be output to the display 106 (or viewfinder) for viewing by a user and/or may be further processed by a graphics engine. Additionally, output from the pipeline processing logic 104 may be sent to memory 108 and the display 106 may read the image data from memory 108. Further, in some implementations, the pipeline processing logic 104 may also include an encoder 107, such as a compression engine, etc., for encoding the image data prior to being read by the display 106.
The encoder 107 may be a JPEG (Joint Photographic Experts Group) compression engine for encoding still images, or an H.264 compression engine for encoding video images, or some combination thereof. Also, it should be noted that the pipeline processing logic 104 may also receive raw image data from the memory 108.
The control logic 105 may include a processor 620 (
Referring now to
In one embodiment, the first process element 201 of an image signal processing pipeline could perform a particular image process such as noise reduction, defective pixel detection/correction, lens shading correction, lens distortion correction, demosaicing, image sharpening, color uniformity, RGB (red, green, blue) contrast, saturation boost process, etc. As discussed above, the pipeline may include a second process element 202. In one embodiment, the second process element 202 could perform a particular and different image process such as noise reduction, defective pixel detection/correction, lens shading correction, demosaicing, image sharpening, color uniformity, RGB contrast, saturation boost process, etc. The image data may then be sent to additional element(s) of the pipeline as the case may be, saved to memory 108, and/or input for display 106.
In one embodiment, an image process performed by a process element 201, 202 in the image signal processing pipeline is an enhanced high dynamic range process. A mode of operation for the enhanced high dynamic range process causes simultaneous images to be captured by image sensors 101. By taking multiple images simultaneously, the object being photographed is captured at the same moment in each image. Under this mode of operation, the multiple images are captured at different exposure levels (e.g., different gain settings) or with some other differing characteristic and are then combined to produce an image having an enhanced range for the particular characteristic. For example, an enhanced image may be produced with one portion having a low exposure, another portion having a medium exposure, and another portion having a high exposure, depending on the number of images that have been simultaneously captured. In a different scenario, simultaneous images may be captured at different focus levels.
In another embodiment, a different image process performed by a process element 201, 202 in the image signal processing pipeline is an enhanced autofocusing process that can be utilized in many contexts, including enhanced continuous autofocusing. A mode of operation for the enhanced autofocusing process causes simultaneous images to be captured by image sensors 101. One of the image sensors 101 (in an assistive role) may be caused to focus on an object and then scan its entire focusing range to find an optimum focus. The optimum focus range is then used by a primary image sensor to capture an image of the object. In one scenario, the primary image sensor 101 may be capturing video of the object or a scene involving the object. Accordingly, the optimum focus range attributed to the second or assistive image sensor 101 may change as the scene changes; therefore, the focus used by the primary image sensor 101 may be adjusted as the video is captured.
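A contrast-based focus scan of the kind described above might look like the following sketch, where `assist_sensor.capture_at()` is a hypothetical driver call standing in for actual focus-motor control, and the Laplacian-variance focus measure is one common choice among several.

```python
import numpy as np
from scipy import ndimage

def sharpness(gray):
    """Focus measure: variance of the Laplacian (higher = sharper)."""
    return ndimage.laplace(gray.astype(np.float64)).var()

def scan_for_best_focus(assist_sensor, focus_positions):
    """Sweep the assistive imager across its focusing range and return the
    position with the highest contrast score. assist_sensor.capture_at() is
    a hypothetical driver call, not a real API."""
    scores = [(pos, sharpness(assist_sensor.capture_at(pos)))
              for pos in focus_positions]
    return max(scores, key=lambda s: s[1])[0]
```

The primary imager would then capture (or continue video) at the returned position, with the scan re-run as the scene changes.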
In an additional embodiment, an image process performed by a process element in the image signal processing pipeline is an enhanced depth of field process. A mode of operation for the enhanced process causes simultaneous images to be captured by image sensors 101. Focusing of the image sensors 101 may be independently controlled by control logic 105. Accordingly, one image sensor may be focused or zoomed closely on an object in a scene and a second image sensor may be focused at a different level on a different aspect of the scene. Image processing in the image signal processing pipeline may then take the captured images and combine them to produce an enhanced image with a greater depth of field. Accordingly, multiple images may be combined to effectively extend the depth of field. Also, some embodiments may utilize images from more than two imagers or image sensors 101.
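One simple way to combine differently focused simultaneous captures is per-pixel selection of the sharper frame; the following sketch assumes registered grayscale float images and uses an illustrative Laplacian detail measure.

```python
import numpy as np
from scipy import ndimage

def extend_depth_of_field(near_focus, far_focus):
    """Combine two simultaneously captured frames focused at different
    depths: at each pixel, keep the frame with the stronger local detail.
    Assumes registered grayscale float images."""
    detail_a = np.abs(ndimage.laplace(near_focus.astype(np.float64)))
    detail_b = np.abs(ndimage.laplace(far_focus.astype(np.float64)))
    # Smooth the binary decision map so region boundaries do not seam.
    pick_a = ndimage.gaussian_filter((detail_a > detail_b).astype(np.float64), 5)
    return pick_a * near_focus + (1.0 - pick_a) * far_focus
```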
In various embodiments, multiple image sensors 101 may not be focused on the same object in a scene. For example, an order may be applied to the image sensors 101 or imagers, where a primary imager captures a scene and a secondary imager captures the scene at a different angle, exposure, gain, etc., and the second image is used to correct or enhance the primary image. Exemplary operations include, but are not limited to, HDR capture and enhanced denoise operations that use one frame to help denoise the other. To illustrate, in one implementation, a scene captured in two simultaneous images may be enhanced by averaging the values of pixels across both images, which improves the signal-to-noise ratio for the captured scene. Also, by having multiple images captured simultaneously at different angles, a curve of the lens shading may be calculated (using the location difference of the same object(s) in the image captures between the two (or more) image sensors) and used to correct affected pixels.
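The averaging operation mentioned above is straightforward; for independent noise, averaging N frames reduces the noise standard deviation by a factor of sqrt(N), so two frames give roughly a 3 dB gain. A minimal sketch, assuming the secondary frame has already been warped to the primary's viewpoint:

```python
import numpy as np

def denoise_by_averaging(primary, secondary):
    """Average two simultaneous captures of the same scene. Signal adds
    coherently while independent noise does not, improving SNR by about
    sqrt(2) for two frames. Assumes registered images."""
    return (primary.astype(np.float64) + secondary.astype(np.float64)) / 2.0
```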
Accordingly, in an additional embodiment, an image process performed by a process element 201, 202 in the image signal processing pipeline is a corrective process. A mode of operation for the enhanced process causes simultaneous images to be captured by image sensors 101. The lenses of the respective imagers 101 may have different angles of view. Therefore, in the image process, images captured at the different angles of view may be compared to determine a difference between the two images. For example, defective hardware or equipment may cause a defect to be visible in a captured image. Due to the different angles of view, the defect will not appear at the same position in the views/images captured by the multiple image sensors 101. Because of this small difference, the image signal processing pipeline is able to differentiate the defect from the real image content and apply some form of correction.
In an additional embodiment, an image process performed by a process element 201, 202 in the image signal processing pipeline is an enhanced image resolution process. A mode of operation for the enhanced process causes simultaneous images to be captured by image sensors 101 at a particular resolution (e.g., 10 Megapixels). Image processing in the image signal processing pipeline may then take the captured images and combine them to produce an enhanced image with an increased or super resolution (e.g., 20 Megapixels). Further, in some embodiments, one of the captured images may be used to improve another captured image and vice versa. Accordingly, multiple enhanced monoscopic images may be produced from the simultaneous capture of images.
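A naive shift-and-add scheme illustrates how two offset captures can populate a denser sampling grid; the sub-pixel offset (dy, dx) is assumed known (e.g., from registration), and a real pipeline would interpolate unfilled sites and deconvolve afterwards.

```python
import numpy as np

def super_resolve_pair(img_a, img_b, dy, dx):
    """Shift-and-add sketch: two simultaneous captures with a known
    sub-pixel offset (assumed 0 <= dy, dx <= 0.5) populate a 2x denser
    grid. Sites covered by both frames are averaged; unfilled sites remain
    zero here and would be interpolated in practice."""
    h, w = img_a.shape
    grid = np.zeros((2 * h, 2 * w))
    hits = np.zeros_like(grid)
    grid[0::2, 0::2] += img_a                        # frame A on the even sites
    hits[0::2, 0::2] += 1
    oy, ox = int(round(dy * 2)), int(round(dx * 2))  # offset on the fine grid
    grid[oy::2, ox::2] += img_b                      # frame B on the shifted sites
    hits[oy::2, ox::2] += 1
    return grid / np.maximum(hits, 1)
```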
In an additional embodiment, an image process performed by a process element in the image signal processing pipeline is an enhanced low light process. A mode of operation for the enhanced process causes simultaneous video streams of images to be captured by image sensors 101 during low lighting conditions.
Consider that camera image quality often suffers during low light conditions. Ambient lighting is often low and not adequate for image sensor arrays designed for adequate lighting conditions. Thus, such sensor arrays receive insufficient photons to capture images with good exposure, leading to dark images. Attempting to correct this via analog or digital gain may help somewhat but also tends to over-amplify the underlying noise (which is more dominant in low lighting conditions). One possible solution is to extend exposure time, but this may not be feasible because hand shake may introduce blurring. Another conventional solution is to add a larger aperture lens or an external flash. The former is a very expensive and size-consuming proposition, while the latter may not be allowed (such as in museums) or may not be effective (such as for distance shots). Flash systems are also costly and consume a lot of power.
Select embodiments of the present disclosure utilize a combination of different image sensors 101 (e.g., infrared, RGB, panchromatic, etc.). For example, one image sensor may advantageously compensate for image information not provided by the other image sensor and vice versa. Accordingly, the image sensors may capture images simultaneously where a majority of image information is obtained from a primary image sensor and additional image information is provided from additional image sensor(s), as needed.
In one embodiment, low light image sensors 101 or panchromatic image sensors 101 are used in concert with a standard RGB (Bayer pattern) image sensor array. Panchromatic sensors receive up to three times the photons of a single RGB sensor due to having a smaller imager die size, but rely on the RGB neighbors for color identification. Such a sensor array design is outperformed by an ordinary RGB sensor at higher lighting levels due to the larger imager die size. One embodiment of an imaging device 150 utilizes an RGB-type CMOS or CCD sensor array for high lighting situations, and a second low light type of sensor designed for low lighting conditions (e.g., fully panchromatic—black and white luma only—or interspersed panchromatic). The imaging device 150 then automatically switches between the two sensors to best capture images under current lighting conditions. Further, in one embodiment, simultaneous images may be captured during low lighting. In particular, by capturing multiple images using a panchromatic imager 101 and a normal lighting imager 101, the captured images can be correlated and combined to produce a more vivid low light image.
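One plausible fusion of the two sensor types, sketched below under the assumption of registered float images in [0, 1], takes luma from the panchromatic capture and chroma from the RGB capture using BT.601-style conversions; the decomposition is an illustrative choice, not the disclosed method.

```python
import numpy as np

def fuse_pan_rgb(pan_luma, rgb):
    """Low-light fusion sketch: detail and low noise come from the
    panchromatic imager's luma; color comes from the RGB imager's chroma.
    Assumes registered float images in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b       # the RGB frame's own (noisy) luma
    cb, cr = (b - y) * 0.564, (r - y) * 0.713   # chroma from the RGB imager
    y = pan_luma                                # replace luma with the pan capture
    r = y + 1.403 * cr
    b = y + 1.773 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```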
As an example, a panchromatic image sensor 101 may be used to capture a video stream at a higher frame rate under low lighting conditions while the chroma data is sampled at only half that rate. This is the temporal counterpart of the common spatial approach of treating chroma with a lesser resolution than luma. Output of the process element 201, 202 may be a single frame sequence or may comprise two separate streams for post-processing access.
In another scenario, motion blur can be reduced using the panchromatic imager 101 and a normal lighting imager 101. Motion blur arises when an object moves in front of the imaging device 150 during capture; in a low light condition, for example, the long exposure chosen for that condition may capture motion of the object being shot or shaking of the imaging device 150 itself. Accordingly, the panchromatic imager is used to capture an image at a smaller exposure than that used for a second image captured by the normal lighting imager. The captured images can be correlated and combined to produce an image with the motion blur corrected.
Embodiments of the imaging device 150 are not limited to having two image sensors; the techniques can be applied to any number of image sensors 101. For example, a tablet device could have two imagers in the front and two imagers in the back of the device, where images (including video) from each of the imagers are simultaneously captured and combined into a resulting image.
Referring next to
Referring to
In some embodiments, the images generated by the first and second paths may be stored in memory 108 and made available for subsequent use by other procedures and elements that follow. Accordingly, in one embodiment, while a main image is being processed in a main path of the pipeline, another image, which might be a downsized or scaled version of that image or of a previous image, may be read by the main path. This may enable more powerful processing in the pipeline, such as during noise filtering.
Also, in some embodiments, similar pixels in the multiple images may be processed once, and disparate pixels are then processed separately. It is noted that images captured simultaneously from two image sensors in close proximity to one another will be quite similar. Therefore, pixels of a first captured image may be processed in a main path of the pipeline. Additionally, similar pixels in a second captured image may be identified with a similarity mask, where the similar pixels are also contained in the first captured image (and are already being processed). After removal of the similar pixels from the second captured image, the remaining pixels may be processed in a secondary path of the pipeline. By removing redundant processing, significant power savings in the image signal processing pipeline may be realized.
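A similarity mask of the kind described might be computed as in the following sketch, assuming registered integer images and an illustrative per-pixel tolerance.

```python
import numpy as np

def split_by_similarity(first, second, tol=2):
    """Flag pixels of the second frame that match the first within `tol`
    as redundant; only the disparate pixels go through the secondary
    pipeline path. Assumes registered integer images."""
    similar = np.abs(first.astype(np.int32) - second.astype(np.int32)) <= tol
    disparate = second[~similar]   # the only pixels the secondary path processes
    return similar, disparate

# After the main path processes `first`, the secondary path processes only
# `disparate`; results for masked pixels are reused from the main path.
```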
Further, in some embodiments, the images generated by the first and second paths may be simultaneously displayed. For example, one display portion of a display 106 can be used to show a video (e.g., outputted from the first path) and a second display portion of the display 106 can be used to show a still image or “snap-shot” from the video (e.g., outputted from the second path) which is responsive to a pause button on an interface of the imaging device 150. Alternatively, an image frame may be shown in a split screen of the display (e.g., a left section) and another image frame may be shown in a right section of the display. The imaging device may be configured to allow a user to select a combination of frames (e.g., the frames being displayed in the split screen), which are then compared and combined by processing logic 103, 104, 105 to generate an enhanced image having improved image quality and resolution.
As previously mentioned, embodiments of the imaging device 150 may employ modes of operation that are selectable from interface elements of the device. Interface elements may include graphical interface elements selectable from a display 106 or mechanical buttons or switches selectable or switchable from a housing of the imaging device 150. In one embodiment, a user may activate a stereoscopic mode of operation, in which processing logic 103, 104, 105 of the imaging device 150 produces a 3D image, using captured images, that is viewable on the display 106 or capable of being saved in memory 108. The user may also activate a 2D mode of operation, where a single image is captured and displayed or saved in memory 108. Further, the user may activate an enhanced 2D mode of operation, where multiple images are captured and used to produce a 2D image with enhanced characteristics (e.g., improved depth of field, enhanced focus, HDR, super-resolution, etc.) that may be viewed or saved in memory 108.
In processing an image, binning allows charges from adjacent pixels to be combined, which can provide an improved signal-to-noise ratio, albeit at the expense of reduced spatial resolution. In various embodiments, different binning levels can be used in each of the multiple image sensors. Therefore, better resolution may be obtained from the image sensor having the lower binning level, and a better signal-to-noise ratio may be obtained from the image sensor having the higher binning level. The two versions of a captured scene or image may then be combined to produce an enhanced version of the image.
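Binning can be modeled in software by summing each 2x2 block: signal grows roughly fourfold while independent noise grows roughly twofold, doubling SNR at half the resolution in each dimension. A minimal sketch:

```python
import numpy as np

def bin_2x2(raw):
    """Software model of 2x2 sensor binning: sum the charge from each
    2x2 block of pixels, trading spatial resolution for SNR."""
    h, w = raw.shape[0] & ~1, raw.shape[1] & ~1       # trim to even dimensions
    blocks = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))
```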
In particular, in one embodiment, multiple image sensors 101 capture multiple images, each with a different exposure level. A process element 201, 202 of an image signal processing pipeline correlates and performs high dynamic range processing on different combinations of the captured images. The resulting images from the different combinations may be displayed to the user, who can select the desired final image to be saved and/or displayed. In some embodiments, a graphical interface slide-bar (or other user interface control element) may also be presented that allows gradual or stepwise shifting, providing differing weighting combinations between images having different exposures. For video, such a setting may be maintained across all frames.
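The slide-bar weighting reduces to a convex blend of the differently exposed captures; a minimal sketch, where `w` is the illustrative slider position in [0, 1] and the frames are assumed registered floats:

```python
def blend_exposures(under, over, w):
    """Weighting behind a hypothetical exposure slide-bar: w = 0 shows the
    under-exposed capture, w = 1 the over-exposed one, and values between
    mix them. For video, the chosen w would be held across all frames."""
    return (1.0 - w) * under + w * over
```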
Multiplexing of the image signal processing pipeline is also implemented in an embodiment utilizing multiple image sensors 101. For example, consider a stereoscopic imaging device (e.g., one embodiment of imaging device 150) that delivers a left image and a right image of an object to a single image signal processing pipeline, as represented in
Therefore, instead of processing one of the images in its entirety after the other has been processed in its entirety, the images can be processed concurrently, with the front-end processing logic 103 switching processing between the images as processing time allows. This reduces latency, since processing of one image is not delayed until completion of the other, and processing of the two images finishes more quickly.
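Tile-level interleaving of two images through one shared stage can be sketched as follows; the tile granularity and the `process` stage are assumptions standing in for the hardware's actual context-switching.

```python
from itertools import zip_longest

def process_interleaved(left_tiles, right_tiles, process):
    """Alternate tiles of the left and right frames through one shared
    pipeline stage, rather than running one frame to completion first.
    `process` stands in for a pipeline stage; a tile is whatever unit the
    hardware context-switches on."""
    _skip = object()
    left_out, right_out = [], []
    for lt, rt in zip_longest(left_tiles, right_tiles, fillvalue=_skip):
        if lt is not _skip:
            left_out.append(process(lt))    # the left tile takes the stage first
        if rt is not _skip:
            right_out.append(process(rt))   # then the right tile
    return left_out, right_out
```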
Keeping the above points in mind,
Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 650 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, among others. In some embodiments, the electronic device 650 may apply such image processing techniques to image data stored in a memory of the electronic device 650. In further embodiments, the electronic device 650 may include multiple imaging devices, such as an integrated or external digital camera or imager 101, configured to acquire image data, which may then be processed by the electronic device 650 using one or more of the above-mentioned image processing techniques.
As shown in
Before continuing, it should be understood that the system block diagram of the device 650 shown in
Referring next to
Beginning in step 702, control logic 105 triggers or initiates simultaneous capture of multiple images from image sensors 101, where the multiple images include at least a first image and a second image. The first image contains an imaging characteristic or setting that is different from an imaging characteristic of the second image. Possible imaging characteristics include exposure levels, focus levels, depth of field settings, angle of views, etc. In step 704, processing logic 103, 104 combines at least the first and second images or portions of the first and second images to produce an enhanced image having qualities of the first and second images. The enhanced image, as an example, may contain portions having depths of field from the first and second images, exposure levels from the first and second images, combined resolutions of the first and second images, etc. The enhanced image is output from an image signal processing pipeline of the processing logic and is provided for display, in step 706.
Next, referring to
In
Correspondingly, in step 904, control logic 105 activates a 2D or monoscopic mode of operation for the imaging device 150, where a single image is captured and displayed or saved in memory 108. In one embodiment, a user may generate a command for the control logic 105 to activate the 2D mode of operation. In an alternative embodiment, the control logic 105 may be configured to automatically activate the 2D mode of operation without user prompting.
Further, in step 906, control logic 105 activates an enhanced 2D or monoscopic mode of operation for the imaging device 150, where multiple images are captured and used to produce a 2D image with enhanced characteristics (e.g., improved depth of field, enhanced focus, HDR, super-resolution, etc.) that may be viewed or saved in memory 108. Additionally, in various embodiments, one of the outputs of the image processing may not be an enhanced image but may instead be image information, such as depth of field information, for the enhanced image. In one embodiment, a user may generate a command for the control logic 105 to activate the enhanced 2D mode of operation. In an alternative embodiment, the control logic 105 may be configured to automatically activate the enhanced 2D mode of operation without user prompting.
Any process descriptions or blocks in flow charts should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of embodiments of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
In the context of this document, a “computer readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of certain embodiments includes embodying the functionality of the embodiments in logic embodied in hardware or software-configured mediums.
It should be emphasized that the above-described embodiments are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
This application claims priority to copending U.S. provisional application entitled, “Image Capture Device Systems and Methods,” having Ser. No. 61/509,747, filed Jul. 20, 2011, which is entirely incorporated herein by reference. This application is related to copending U.S. utility patent application entitled “Multiple Image Processing” filed Sep. 19, 2011 and accorded Ser. No. 13/235,975, which is entirely incorporated herein by reference.